Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
Problem:
Certain Insights Advisor features differ between RHEL Advisor and OCP Advisor.
Goal:
Address top priority UI misalignments between RHEL and OCP Advisor. Address UI features dropped from Insights Advisor for OCP GA.
Scope:
Specific tasks and their priorities are tracked in https://issues.redhat.com/browse/CCXDEV-7432.
This contains all the Insights Advisor widget deliverables for the OCP 4.11 release.
Scope:
It covers only minor bug fixes and improvements:
Scenario: Check if the Insights Advisor widget in the OCP WebConsole UI shows the time of the last data analysis
Given: OCP WebConsole UI and the cluster dashboard are accessible
And: CCX external data pipeline is in a working state
And: administrator A1 has access to his cluster's dashboard
And: Insights Operator for this cluster is sending archives
When: administrator A1 clicks on the Insights Advisor widget
Then: the results of the last analysis are shown in the Insights Advisor widget
And: the time of the last analysis is shown in the Insights Advisor widget
Acceptance criteria:
max_over_time(timestamp(changes(insightsclient_request_send_total{status_code="202"}[1m]) > 0)[24h:1m])
Show the error message (mocked in CCXDEV-5868) if the Prometheus metric `cluster_operator_conditions{name="insights"}` contains two true conditions at the same time: UploadDegraded and Degraded. This state occurs when there was an IO (Insights Operator) archive upload error, i.e., problems with the pipeline.
Expected for 4.11 OCP release.
Cloning the existing rule should end up with a new rule in the same namespace.
Modifications can now be done to the new rule.
(Optional) You can silence the existing rule.
Create a new PrometheusRule object inside the namespace that includes the metrics you need to form the alerting rule.
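A minimal sketch of such an object (the alert name, expression, and threshold are hypothetical):
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert
  namespace: my-namespace  # the namespace that owns the rule
spec:
  groups:
  - name: example.rules
    rules:
    - alert: ExampleHighErrorRate  # hypothetical alert
      expr: rate(http_requests_total{code=~"5.."}[5m]) > 0.1
      for: 10m
      labels:
        severity: warning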
CMO should reconcile the platform Prometheus configuration with the alert-relabel-config resources.
DoD
CMO should reconcile the platform Prometheus configuration with the AlertingRule resources.
DoD
Managing PVs at scale for a fleet creates difficulties where "one size does not fit all". The ability for SRE to deploy Prometheus with PVs and have retention based on a desired size would enable easier management of these volumes across the fleet.
The prometheus-operator exposes retentionSize.
Field | Description |
---|---|
retentionSize | Maximum amount of disk space used by blocks. Supported units: B, KB, MB, GB, TB, PB, EB. Ex: 512MB. |
This is a feature request to enable this configuration option via CMO cluster-monitoring-config ConfigMap.
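A sketch of how that could surface in the cluster-monitoring-config ConfigMap (the field name mirrors the prometheus-operator option; the final API shape is up to the implementation):
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retentionSize: 50GB  # hypothetical value; maximum disk space used by blocks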
Today, all configuration (for example, per-team routing configuration) is done via a single configuration file that only admins have access to. If an environment uses multiple tenants and each tenant, for example, has different systems that they use to notify teams in case of an issue, then someone needs to file a request with an admin to add the required settings.
That can be bothersome for individual teams, since requests like that usually disappear in the backlog of an administrator. At the same time, administrators might get tons of requests that they have to look at and prioritize, which takes them away from more crucial work.
We would like to introduce a more self-service approach in which individual teams can create their own configuration for their needs without the administrator's involvement.
Last but not least, since Monitoring is deployed as a core service of OpenShift, there are multiple restrictions that the SRE team has to apply to all OSD and ROSA clusters. One restriction concerns the customers' ability to use the central Alertmanager that is owned and managed by the SRE team. SRE can't give access to the centrally managed secret due to security concerns, so users cannot add their own routing information.
Provide a new API (based on the Operator CRD approach) as part of the Prometheus Operator that allows creating a subset of the Alertmanager configuration without touching the central Alertmanager configuration file.
Please note that we do not plan to support additional individual webhooks with this work. Customers will need to deploy their own version of the third party webhooks.
Team A wants to send all their important notifications to a specific Slack channel.
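For illustration, a minimal sketch of what that could look like for this use case, assuming the upstream AlertmanagerConfig v1beta1 schema (all names and the referenced Secret are hypothetical):
apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: team-a-routing
  namespace: team-a  # the team's own namespace, no central config touched
spec:
  route:
    receiver: team-a-slack
    matchers:
    - name: severity
      matchType: "="
      value: critical
  receivers:
  - name: team-a-slack
    slackConfigs:
    - channel: "#team-a-alerts"
      apiURL:  # Secret in the same namespace holding the Slack webhook URL
        name: slack-webhook
        key: url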
* CI - CI is running, tests are automated and merged.
* Release Enablement <link to Feature Enablement Presentation>
* DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
* DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
* DEV - Downstream build attached to advisory: <link to errata>
* QE - Test plans in Polarion: <link or reference to Polarion>
* QE - Automated tests merged: <link or reference to automated tests>
* DOC - Downstream documentation merged: <link to meaningful PR>
Now that upstream supports AlertmanagerConfig v1beta1 (see MON-2290 and https://github.com/prometheus-operator/prometheus-operator/pull/4709), it should be deployed by CMO.
DoD:
Copy/paste from https://github.com/openshift-cs/managed-openshift/issues/60
Which service is this feature request for?
OpenShift Dedicated and Red Hat OpenShift Service on AWS
What are you trying to do?
Allow ROSA/OSD to integrate with AWS Managed Prometheus.
Describe the solution you'd like
Remote-write of metrics is supported in OpenShift but it does not work with AWS Managed Prometheus since AWS Managed Prometheus requires AWS SigV4 auth.
Describe alternatives you've considered
There is the workaround to use the "AWS SigV4 Proxy" but I'd think this is not properly supported by RH.
https://mobb.ninja/docs/rosa/cluster-metrics-to-aws-prometheus/
Additional context
The customer wants to use an open and portable solution to centralize metrics storage and analysis. If they also deploy to other clouds, they don't want to have to re-configure. Since most clouds offer a Prometheus service (or it's easy to self-manage Prometheus), app migration should be simplified.
The cluster monitoring operator should allow OpenShift customers to configure remote write with all authentication methods supported by upstream Prometheus.
We will extend CMO's configuration API to support the following authentication methods with remote write:
Customers want to send metrics to AWS Managed Prometheus that require sigv4 authentication (see https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-secure-metric-ingestion.html#AMP-secure-auth).
Prometheus and the Prometheus operator already support sigv4 authentication for remote write. It should be possible to configure the same in the CMO configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write.endpoint"
        sigv4:
          accessKey:
            name: aws-credentials
            key: access
          secretKey:
            name: aws-credentials
            key: secret
          profile: "SomeProfile"
          roleArn: "SomeRoleArn"
DoD:
Prometheus and the Prometheus operator already support custom authorization for remote write. It should be possible to configure the same in the CMO configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write.endpoint"
        authorization:
          type: Bearer
          credentials:
            name: credentials
            key: token
DoD:
As a WMCO user, I want to make sure containerd logging information has been updated in documents and scripts.
Configure audit logging to capture login, logout and login failure details
TODO(PM): update this
A customer needs login, logout and login failure details inside OpenShift Container Platform.
I have checked for this on my test cluster, but the audit logs do not contain any user name specifying login or logout details. For successful logins or logouts, on the CLI and the OpenShift console, we only see 'Login successful' or 'Invalid credentials'.
Expected results: Login, logout and login failures should be captured in audit logging.
The apiserver pods today have `/var/log/<kube|oauth|openshift>-apiserver` mounted from the host and create audit files there using the upstream audit event format (JSON lines following https://github.com/kubernetes/apiserver/blob/92392ef22153d75b3645b0ae339f89c12767fb52/pkg/apis/audit/v1/types.go#L72). These events are apiserver specific, but as oauth authentication flow events are also requests, we can use the apiserver event format to log logins, login failures and logouts. Hence, we propose to make the oauth-server create /var/log/oauth-server/audit.log files on the master nodes using that format.
When the login flow does not finish within a certain time (e.g. 10min), we can artificially create an event to show a login failure in the audit logs.
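For illustration, one such oauth-server audit event could look roughly like this (rendered as YAML for readability; the real files are JSON lines, and the decision annotations are assumptions):
apiVersion: audit.k8s.io/v1
kind: Event
level: Metadata
auditID: 7e5f2a31-0000-0000-0000-000000000000  # hypothetical
stage: ResponseComplete
requestURI: /oauth/authorize?client_id=openshift-browser-client
verb: post
user:
  username: system:anonymous  # login requests start out unauthenticated
sourceIPs:
- 10.0.0.1
annotations:
  authentication.openshift.io/decision: allow    # assumption: allow/deny/error
  authentication.openshift.io/username: testuser # assumption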
Right now there's no way to generate audit logs from this.
Right now there's no way to generate audit logs from this.
Let the Cluster Authentication Operator deliver the policy to OAuthServer.
In order to know if authn events should be logged, OAuthServer needs to be aware of it.
* Stanislav Laznicka: Create an observer to deliver the audit policy to the oauth server
Make the authentication-operator react to the new audit field in the oauth.config/cluster object. Write an observer watching this field, such an observer will translate the top-level configuration into oauth-server config and add it to the rest of the observed config.
OCP/Telco Definition of Done
Feature Template descriptions and documentation.
Early customer feedback is that they see SNO as a great solution covering smaller footprint deployment, but are wondering what is the evolution story OpenShift is going to provide where more capacity or high availability are needed in the future.
While migration tooling (moving workload/config to new cluster) could be a mid-term solution, customer desire is not to include extra hardware to be involved in this process.
For Telecommunications Providers at the Far Edge, the intent is to start small and then grow. Many of these operators will start with a SNO-based DU deployment as an initial investment. But as DUs evolve, different segments of the radio spectrum are added, various radio hardware is provisioned, and features are delivered to the Far Edge, Telecommunications Providers want the ability for their Far Edge deployments to scale up from 1 node to 2 nodes to n nodes. On the opposite side of the spectrum from SNO is MMIMO, where there is a robust cluster and workloads use HPA.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
This is a ticket meant to track all the OCP PRs that are involved in the implementation of the SNO + workers enhancement.
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
Rebase openshift/builder to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As a deployer, I want to be able to:
so that I can achieve
Currently the Assisted Service generates the credentials by running the ignition generation step of the openshift-installer. This is why the credentials are only retrievable from the REST API towards the end of the installation.
In the BILLI usage, which takes down the Assisted Service before the installation is complete, there is no obvious point at which to alert the user that they should retrieve the credentials. This means that we either need to:
This requires/does not require a design proposal.
This requires/does not require a feature gate.
The AWS-specific code added in OCPPLAN-6006 needs to become GA and with this we want to introduce a couple of Day2 improvements.
Currently the AWS tags are defined and applied at installation time only and saved in the infrastructure CRD's status field for further operator use, which in turn just add the tags during creation.
Saving in the status field means it's not included in Velero backups, which is a crucial feature for customers and Day2.
Thus the status.resourceTags field should be deprecated in favour of a newly created spec.resourceTags with the same content. The installer should only populate the spec; consumers of the infrastructure CRD must favour the spec over the status definition if both are supplied, otherwise the status should be honored and a warning shall be issued.
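A sketch of the intended shape; the ticket refers to spec.resourceTags and status.resourceTags, which in the Infrastructure API sit under the AWS platform-specific sections (the tag values here are hypothetical):
apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  name: cluster
spec:
  platformSpec:
    type: AWS
    aws:
      resourceTags:  # authoritative once set; consumers favour this over status
      - key: cost-center
        value: "1234"
status:
  platformStatus:
    type: AWS
    aws:
      resourceTags:  # legacy location, honored only if the spec is unset
      - key: cost-center
        value: "1234"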
Being part of the spec, the behaviour should also tag existing resources that do not have the tags yet and once the tags in the infrastructure CRD are changed all the AWS resources should be updated accordingly.
On AWS this can be done without re-creating any resources (the behaviour is basically an upsert by tag key) and is possible without service interruption as it is a metadata operation.
Tag deletes continue to be out of scope, as the customer can still have custom tags applied to the resources that we do not want to delete.
Due to the ongoing intree/out of tree split on the cloud and CSI providers, this should not apply to clusters with intree providers (!= "external").
Once confident we have all components updated, we should introduce an end2end test that makes sure we never create resources that are untagged.
After that, we can remove the experimental flag and make this a GA feature.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
List any affected packages or components.
RFE-1101 described user defined tags for AWS resources provisioned by an OCP cluster. Currently user can define tags which are added to the resources during creation. These tags cannot be updated subsequently. The propagation of the tags is controlled using experimental flag. Before this feature goes GA we should define and implement a mechanism to exclude any experimental flags. Day2 operations and deletion of tags is not in the scope.
RFE-2012 aims to make the user-defined resource tags feature GA. This means that user defined tags should be updatable.
Currently the user-defined tags during install are passed directly as parameters of the Machine and Machineset resources for the master and worker. As a result these tags cannot be updated by consulting the Infrastructure resource of the cluster where the user defined tags are written.
The MCO should be changed such that during provisioning the MCO looks up the values of the tags in the Infrastructure resource and adds the tags during creation of the EC2 resources. The MCO should also watch the infrastructure resource for changes and when the resource tags are updated it should update the tags on the EC2 instances without restarts.
Acceptance Criteria:
Much like core OpenShift operators, a standardized flow exists for OLM-managed operators to interact with the cluster in a specific way to leverage AWS STS authorization when using AWS APIs, as opposed to insecure static, long-lived credentials. OLM-managed operators can implement integration with the CloudCredentialOperator in a well-defined way to support this flow.
Enable customers to easily leverage OpenShift's capabilities around AWS STS with layered products, for increased security posture. Enable OLM-managed operators to implement support for this in a well-defined pattern.
See Operators & STS slide deck.
The CloudCredentialOperator already provides a powerful API for OpenShift's core cluster operators to request credentials and acquire them via short-lived tokens. This capability should be expanded to OLM-managed operators, specifically to Red Hat layered products that interact with AWS APIs. The process today ranges from cumbersome to non-existent depending on the operator in question and is seen as an adoption blocker for OpenShift on AWS.
This is particularly important for ROSA customers. Customers are expected to be asked to pre-create the required IAM roles outside of OpenShift, which is deemed acceptable.
This Section: High-Level description of the Market Problem ie: Executive Summary
This Section: Articulates and defines the value proposition from a users point of view
This Section: Effect is the expected outcome within the market. There are two dimensions of outcomes; growth or retention. This represents part of the “why” statement for a feature.
As an engineer I want the capability to implement CI test cases that run at different intervals, be it daily or weekly, to ensure that downstream operators depending on certain capabilities are not negatively impacted when the systems CCO interacts with change behavior.
Acceptance Criteria:
Create a stubbed out e2e test path in CCO and matching e2e calling code in release such that there exists a path to tests that verify working in an AWS STS workflow.
OCP/Telco Definition of Done
Feature Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Feature --->
<--- Remove the descriptive text as appropriate --->
Problem
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
Running the OPCT with the latest version (v0.1.0) on OCP 4.11.0, openshift-tests reports an incorrect counter for the "total" field.
In the example below, after the 1127th test, the total follows the same counter as executed. I would also assume that the total is incorrect before that point, since both counters increase as the test run continues.
openshift-tests output format: [failed/executed/total]
started: (0/1126/1127) "[sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (38s) 2022-08-09T17:12:21 "[sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: (0/1127/1127) "[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (6.6s) 2022-08-09T17:12:21 "[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: (0/1128/1128) "[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Suite:openshift/conformance/parallel] [Suite:k8s]"
skip [k8s.io/kubernetes@v1.24.0/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support GenericEphemeralVolume -- skipping
Ginkgo exit error 3: exit with code 3
skipped: (400ms) 2022-08-09T17:12:21 "[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: (0/1129/1129) "[sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information [Suite:openshift/conformance/parallel] [Suite:k8s]"
OPCT output format [executed/total (failed failures)]
Tue, 09 Aug 2022 14:12:13 -03> Global Status: running
JOB_NAME | STATUS | RESULTS | PROGRESS | MESSAGE
openshift-conformance-validated | running | | 1112/1127 (0 failures) | status=running
openshift-kube-conformance | complete | | 352/352 (0 failures) | waiting for post-processor...
Tue, 09 Aug 2022 14:12:23 -03> Global Status: running
JOB_NAME | STATUS | RESULTS | PROGRESS | MESSAGE
openshift-conformance-validated | running | | 1120/1127 (0 failures) | status=running
openshift-kube-conformance | complete | | 352/352 (0 failures) | waiting for post-processor...
Tue, 09 Aug 2022 14:12:33 -03> Global Status: running
JOB_NAME | STATUS | RESULTS | PROGRESS | MESSAGE
openshift-conformance-validated | running | | 1139/1139 (0 failures) | status=running
openshift-kube-conformance | complete | | 352/352 (0 failures) | waiting for post-processor...
Tue, 09 Aug 2022 14:12:43 -03> Global Status: running
JOB_NAME | STATUS | RESULTS | PROGRESS | MESSAGE
openshift-conformance-validated | running | | 1185/1185 (0 failures) | status=running
openshift-kube-conformance | complete | | 352/352 (0 failures) | waiting for post-processor...
Tue, 09 Aug 2022 14:12:53 -03> Global Status: running
JOB_NAME | STATUS | RESULTS | PROGRESS | MESSAGE
openshift-conformance-validated | running | | 1188/1188 (0 failures) | status=running
openshift-kube-conformance | complete | | 352/352 (0 failures) | waiting for post-processor...
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This potentially warrants an OpenShift blog post.
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Create a PR in openshift/cluster-ingress-operator to implement configurable router probe timeouts.
The PR should include the following:
User Story: As a customer in a highly regulated environment, I need the ability to secure DNS traffic when forwarding requests to upstream resolvers so that I can ensure additional security for DNS traffic and data privacy.
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different from investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
In OCP 4.8 the router was changed to use the "random" balancing algorithm for non-passthrough routes by default. It was previously "leastconn".
Bug https://bugzilla.redhat.com/show_bug.cgi?id=2007581 shows that using "random" by default incurs significant memory overhead for each backend that uses it.
PR https://github.com/openshift/cluster-ingress-operator/pull/663
reverted the change and made "leastconn" the default again (OCP 4.8 onwards).
The analysis in https://bugzilla.redhat.com/show_bug.cgi?id=2007581#c40 shows that the default haproxy behaviour is to multiply the weight (specified in the route CR) by 16 as it builds its data structures for each backend. If no weight is specified then openshift-router sets the weight to 256. If you have many, many thousands of routes then this balloons quickly and leads to a significant increase in memory usage, as highlighted by customer cases attached to BZ#2007581.
The purpose of this issue is to both explore changing the openshift-router default weight (i.e., 256) to something smaller, or indeed unset (assuming no explicit weight has been requested), and to measure the memory usage within the context of the existing perf&scale tests that we use for vetting new haproxy releases.
It may be that the low-hanging change is to not default to weight=256 for backends that only have one pod replica (i.e., if no value specified, and there is only 1 pod replica, then don't default to 256 for that single server entry).
Outcome: does changing the [default] weight value make it feasible to switch back to "random" as the default balancing algorithm for a future OCP release.
Revert router to using "random" once again in 4.11 once analysis is done on impact of weight and static memory allocation.
Per the 4.6.30 Monitoring DNS Post Mortem, we should add E2E tests to openshift/cluster-dns-operator to reduce the risk that changes to our CoreDNS configuration break DNS resolution for clients.
To begin with, we can scope this story to 2 or 3 client libraries to establish a framework for testing DNS resolvers; the work of adding additional client libraries to this framework can be left for follow-up stories. Two common libraries are Go's resolver and glibc's resolver. A somewhat common library that is known to have quirks is musl libc's resolver, which uses a shorter timeout value than glibc's resolver and reportedly has issues with the EDNS0 protocol extension. It would also make sense to test Java or other popular languages or runtimes that have their own resolvers.
The ultimate goal is that the test will inform us when a change to OpenShift's DNS or networking has an effect that may impact end-user applications.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
During master nodes upgrade when nodes are getting drained there's currently no protection from two or more operands going down. If your component is required to be available during upgrade or other voluntary disruptions, please consider deploying PDB to protect your operands.
The effort is tracked in https://issues.redhat.com/browse/WRKLDS-293.
Example:
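A minimal sketch of the kind of PDB the controller could create for the console pods (namespace and label selector are assumptions based on the console deployment):
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: console
  namespace: openshift-console
spec:
  minAvailable: 1  # keep at least one console pod up during node drains
  selector:
    matchLabels:
      app: console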
Acceptance Criteria:
1. Create PDB controller in console-operator for both console and downloads pods
2. Add e2e tests for PDB in single node and multi node cluster
Note: We should consider to backport this to 4.10
When viewing the Installed Operators list set to 'All projects' and then selecting an operator that is available in 'All namespaces' (globally installed), clicking the operator takes the user into the details of that operator in its install namespace (the project selector will switch to the install namespace).
This can be disorienting, because the lists of custom resource instances then appear blank: the lists show instances only in the currently selected project (the install namespace) and not across all namespaces the operator is available in.
It is likely that making use of the new Operator resource will improve this experience (CONSOLE-2240), though that may still be some releases away. It should be considered whether a "short term" fix is worthwhile in the meantime.
Note: The informational alert was not implemented. It was decided that since "All namespaces" is displayed in the radio button, the alert was not needed.
Goal
Add support for PDB (Pod Disruption Budget) to the console.
Requirements:
Designs:
Customers are asking for improvements to the upgrade experience (both over-the-air and disconnected). This is a feature tracking the epics required to get that work done.
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
Goal
Add the ability to choose between a full cluster upgrade (which exists today) or control plane upgrade (which will pause all worker pools) in the console.
Background
Currently in the console, users only have the ability to complete a full cluster upgrade. For many customers, upgrades take longer than what their maintenance window allows. Users need the ability to upgrade the control plane independently of the other worker nodes.
Ex. Upgrades of huge clusters may take too long so admins may do the control plane this weekend, worker-pool-A next weekend, worker-pool-B the weekend after, etc. It is all at a pool level, they will not be able to choose specific hosts.
Requirements
Design deliverables:
Goal
Improve the UX on the machine config pool page to reflect the new enhancements on the cluster settings that allows users to select the ability to update the control plane only.
Background
Currently in the console, users only have the ability to complete a full cluster upgrade. For many customers, upgrades take longer than what their maintenance window allows. Users need the ability to upgrade the control plane independently of the other worker nodes.
Ex. Upgrades of huge clusters may take too long so admins may do the control plane this weekend, worker-pool-A next weekend, worker-pool-B the weekend after, etc. It is all at a pool level, they will not be able to choose specific hosts.
Requirements
Design deliverables:
Enable sharing ConfigMap and Secret across namespaces
Requirement | Notes | isMvp? |
---|---|---|
Secrets and ConfigMaps can get shared across namespaces | | YES |
NA
NA
Consumption of RHEL entitlements has been a challenge on OCP 4 since it moved to a cluster-based entitlement model, compared to the node-based (RHEL subscription manager) entitlement model. In order to provide a sufficiently similar experience to OCP 3, the entitlement certificates that are made available on the cluster (OCPBU-93) should be shared across namespaces, in order to prevent the need for the cluster admin to copy these entitlements into each namespace, which leads to additional operational challenges for updating and refreshing them.
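A sketch of how the entitlement secret could be shared using the Shared Resource CSI driver's SharedSecret CRD (resource names follow the in-cluster entitlement secret; treat them as assumptions):
apiVersion: sharedresource.openshift.io/v1alpha1
kind: SharedSecret
metadata:
  name: etc-pki-entitlement  # cluster-scoped share
spec:
  secretRef:
    name: etc-pki-entitlement
    namespace: openshift-config-managed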
Questions to be addressed:
* What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
* Does this feature have doc impact?
* New Content, Updates to existing content, Release Note, or No Doc Impact
* If unsure and no Technical Writer is available, please contact Content Strategy.
* What concepts do customers need to understand to be successful in [action]?
* How do we expect customers will use the feature? For what purpose(s)?
* What reference material might a customer want/need to complete [action]?
* Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
* What is the doc impact (New Content, Updates to existing content, or Release Note)?
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
<--- Cut-n-Paste the entire contents of this description into your new Epic --->
As a developer using SharedSecrets and ConfigMaps
I want to ensure all pods set readOnly: true on admission
So that I don't have pods stuck in the "Pending" state because of a bad volume mount
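A sketch of the volume definition the webhook would enforce (driver and attribute names per the Shared Resource CSI driver; the share name is illustrative):
volumes:
- name: entitlement
  csi:
    driver: csi.sharedresource.openshift.io
    readOnly: true  # admission rejects pods that omit this
    volumeAttributes:
      sharedSecret: etc-pki-entitlement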
QE will need to verify the new Pod Admission behavior
Docs will need to state that readOnly is required and must be set to true.
None.
QE testing/verification of the feature - require readOnly to be true
Actions:
1. Create smoke test and submit to GitHub
2. Run script to integrate smoke test with Polarion
As an OpenShift engineer
I want the shared resource CSI Driver webhook to be installed with the cluster storage operator
So that the webhook is deployed when the CSI driver is deployed
None - no new functional capabilities will be added
None - we can verify in CI that we are deploying the webhook correctly.
None - no new functional capabilities will be added
The scope of this story is to just deploy the "hello world" webhook with the Cluster Storage Operator.
Adding the live ValidatingWebhook configuration and service will be done in a separate story.
As an OpenShift engineer,
I want to initialize a validating admission webhook for the shared resource CSI driver
So that I can eventually require readOnly: true to be set on all pods that use the Shared Resource CSI Driver
None.
None.
None.
This is a prerequisite for implementing the validating admission webhook.
We need to have ART build the container image downstream so that we can add the correct image references for the CVO.
If we reference images in the CVO manifests which do not have downstream counterparts, we break the downstream build for the payload.
CI is capable of producing multiple images for a GitHub repository. For example, github.com/openshift/oc produces 4-5 images with various capabilities.
We did similar work in BUILD-234 - some of these steps are not required.
See also:
Tasks:
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
https://issues.redhat.com/browse/AUTH-2 revealed that, in principle, Pod Security Admission can be integrated into OpenShift while retaining SCC functionality.
This epic is about the concrete steps to enable Pod Security Admission by default in OpenShift
Enhancement - https://github.com/openshift/enhancements/pull/1010
ingress-operator must comply with pod security. The current audit warning is:
{ "objectRef": "openshift-ingress-operator/deployments/ingress-operator", "pod-security.kubernetes.io/audit-violations": "would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")" }
dns-operator must comply with the restricted pod security level. The current audit warning is:
{ "objectRef": "openshift-dns-operator/deployments/dns-operator", "pod-security.kubernetes.io/audit-violations": "would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")" }
HyperShift provisions OpenShift clusters with externally managed control planes. It follows a slightly different process for provisioning clusters. For example, HyperShift uses Cluster API as a backend and moves all the machine management bits to the management cluster.
Showing machine management/cluster auto-scaling tabs in the console is likely to confuse users and cause unnecessary side effects.
See Design Doc: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#
The SERVER_FLAG controlPlaneTopology being set to External is really the driving factor here; this can be done in one of two ways:
To test work related to cluster upgrade process, use a 4.10.3 cluster set on the candidate-4.10 upgrade channel using 4.11 frontend code.
If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. The console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to suspend the kubeadmin notifier in the global notifications, since it contains a link for updating the cluster OAuth configuration (see attachment).
Based on Cesar's comment we should be removing the `Control Plane` section, if infrastructure.status.controlPlaneTopology is "External".
If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. The console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to remove the ability to "Add identity providers" under "Set up your Cluster". In addition to the getting started card, we should remove the ability to update a cluster on the details card when applicable (anything that changes a cluster version should be read only).
Summary of changes to the overview page:
Check section 03 for more info: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#
If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. Console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to suspend these notifications:
For these we will need to check `ControlPlaneTopology`: if it's set to 'External', also check whether the user can edit the cluster version (either by creating a hook or an RBAC call, e.g. `canEditClusterVersion`).
Check section 05 for more info: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#
If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. The console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to surface a message that the control plane is externally managed and add the following changes:
In general, anything that changes a cluster version should be read only.
Check section 02 for more info: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#
PatternFly Dark Theme Handbook: https://docs.google.com/document/d/1mRYEfUoOjTsSt7hiqjbeplqhfo3_rVDO0QqMj2p67pw/edit
Admin Console -> Workloads & Pods
Dev Console -> Gotcha pages: Observe Dashboard and Metrics, Add, Pipelines: builder, list, log, and run
As a developer, I want to be able to scope the changes needed to enable dark mode for the admin console. As such, I need to investigate how much of the console will display dark mode using PF variables and also define a list of gotcha pages/components which will need special casing above and beyond PF variable settings.
Acceptance criteria:
As a developer, I want to be able to fix remaining issues from the spreadsheet of issues generated after the initial pass and spike of adding dark theme to the console. As such, I need to make sure to either complete all remaining issues from the spreadsheet, or create a bug or future story for any remaining issues in these two documents.
Acceptance criteria:
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
The Cluster Dashboard Details Card Protractor integration test was failing at high rate, and despite multiple attempts to fix, was never fully resolved, so it was disabled as a way to fix https://bugzilla.redhat.com/show_bug.cgi?id=2068594. Migrating this entire file to Cypress should give us better debugging capability, which is what was done to fix a similarly problematic project dashboard Protractor test.
In the 4.11 release, a console.openshift.io/default-i18next-namespace annotation is being introduced. The annotation indicates whether the ConsolePlugin contains localization resources. If the annotation is set to "true", the localization resources from the i18n namespace named after the dynamic plugin (e.g. plugin__kubevirt) are loaded. If the annotation is set to any other value or is missing on the ConsolePlugin resource, localization resources are not loaded.
In case these resources are not present in the dynamic plugin, the initial console load will be slowed down. For more info check BZ#2015654
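For example, a plugin named kubevirt shipping localization resources would be annotated like this (apiVersion per the 4.11-era ConsolePlugin API; service details are placeholders):
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  name: kubevirt
  annotations:
    console.openshift.io/default-i18next-namespace: "true"  # load the plugin__kubevirt i18n namespace
spec:
  displayName: KubeVirt Plugin
  service:
    name: kubevirt-plugin
    namespace: kubevirt
    port: 9443
    basePath: /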
AC:
Follow up of https://issues.redhat.com/browse/CONSOLE-3159
We need to provide a base for running integration tests using the dynamic plugins. The tests should initially
Once the basic framework is in place, we can update the demo plugin and add new integration tests when we add new extension points.
https://github.com/openshift/console/tree/master/frontend/dynamic-demo-plugin
https://github.com/openshift/enhancements/blob/master/enhancements/console/dynamic-plugins.md
https://github.com/openshift/console/tree/master/frontend/packages/console-plugin-sdk
We have a Timestamp component for consistent display of dates and times that we should expose through the SDK. We might also consider a hook that formats dates and times for places where you don't want or can't use the component, e.g. times on a chart.
This will become important when we add a user preference for dates so that plugins show consistent dates and times as console. If I set my user preference to UTC dates, console should show UTC dates everywhere.
AC:
Currently, you need to navigate to
Cluster Settings ->
Global configuration ->
Console (operator) config ->
Console plugins
to see and manage plugins. This takes a lot of clicks and is not discoverable. We should look at surfacing plugin details where they're easier to find – perhaps on the Cluster Settings page – or at least provide a more convenient link somewhere in the UI.
AC: Add the Dynamic Plugins section to the Status Card in the overview that will contain:
Currently, enabled plugins can fail to load for a variety of reasons. For instance, plugins don't load if the plugin name in the manifest doesn't match the ConsolePlugin name or the plugin has an invalid codeRef. There is no indication in the UI that something has gone wrong. We should explore ways to report this problem in the UI to cluster admins. Depending on the nature of the issue, an admin might be able to resolve the issue or at least report a bug against the plugin.
The message about failing could appear in the notification drawer and/or console plugins tab on the operator config. We could also explore creating an alert if a plugin is failing.
AC:
Goal
Background
RFE: for 4.10, Cincinnati and the cluster-version operator are adding conditional updates (a.k.a. targeted edge blocking): https://issues.redhat.com/browse/OTA-267
High-level plans in https://github.com/openshift/enhancements/blob/master/enhancements/update/targeted-update-edge-blocking.md#update-client-support-for-the-enhanced-schema
Example of what the oc adm upgrade UX will be in https://github.com/openshift/enhancements/blob/master/enhancements/update/targeted-update-edge-blocking.md#cluster-administrator.
The oc implementation landed via https://github.com/openshift/oc/pull/961.
Design
See design doc: https://docs.google.com/document/d/1Nja4whdsI5dKmQNS_rXyN8IGtRXDJ8gXuU_eSxBLMIY/edit#
See marvel: https://marvelapp.com/prototype/h3ehaa4/screen/86077932
Update the cluster settings page to inform the user when the latest available update is supported but not recommended. Add an informational popover to the latest version in update path visualization.
The "Update Version" modal on the cluster settings page should be updated to give users information about recommended, not recommended, and blocked update versions.
Story: As an administrator I want to rely on a default configuration that spreads image registry pods across topology zones so that I don't suffer from a long recovery time (>6 mins) in case of a complete zone failure if all pods are impacted.
Background: The image registry currently uses affinity/anti-affinity rules to spread registry pods across different hosts. However, this might cause situations in which all pods end up on hosts of a single zone, leading to a long recovery time of the registry if that zone is lost entirely. Due to problems in the past with the "preferred" setting of anti-affinity rule adherence, the configuration was instead forced with "required" and the rules became constraints. With zones as constraints, the internal registry would no longer deploy in environments with a single zone, e.g. the internal CI environment. Pod topology spread constraints is a newer API supported in OCP which can also relax constraints in case they cannot be satisfied. Details here: https://docs.openshift.com/container-platform/4.7/nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints.html
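A sketch of the kind of constraint this implies for the registry deployment (the label selector is an assumption):
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway  # relax the constraint instead of blocking scheduling in single-zone clusters
  labelSelector:
    matchLabels:
      docker-registry: default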
Acceptance criteria:
Open Questions:
As an OpenShift administrator
I want to provide the registry operator with a custom certificate authority for S3 storage
so that I can use a third-party S3 storage provider.
Remove Jenkins from the OCP Payload.
See epic linking - need alternative non payload image available to provide relatively seamless migration
Also, the EP for this is approved and merged at https://github.com/openshift/enhancements/blob/master/enhancements/builds/remove-jenkins-payload.md
PARTIAL ANSWER ^^: confirmed with Ben Parees in https://coreos.slack.com/archives/C014MHHKUSF/p1646683621293839 that EP merging is currently sufficient OCP "technical leadership" approval.
assuming none
As maintainers of the OpenShift Jenkins component, we need to run Jenkins CI for PR testing against openshift/jenkins, openshift/jenkins-sync-plugin, openshift/jenkins-client-plugin and openshift/jenkins-openshift-login-plugin, using images built in the CI pipeline but not injected into CI test clusters via the samples operator overriding the jenkins sample imagestream with the jenkins payload image.
As maintainers of the OpenShift Jenkins component, we need Jenkins periodics for the client and sync plugins to run against the latest non payload, CPaas image, promoted to CI's image locations on quay.io, for the current release in development.
As maintainers of the OpenShift Jenkins component, we need Jenkins related tests outside of very basic Jenkins Pipeline Strategy Build Config verification removed from openshift-tests in OpenShift Origin, using a non-payload, CPaas image pertinent to the branch in question.
High Level, we ideally want to vet the new CPaas image via CI and periodics BEFORE we start changing the samples operator so that it does not manipulate the jenkins imagestream (our tests will override the samples operator override)
NONE ... QE should wait until JNKS-254
NONE
NONE
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Possible staging
1) before CPaas is available, we can validate images generated by PRs to openshift/jenkins, openshift/jenkins-sync-plugin and openshift/jenkins-client-plugin by taking the image built from the PR (where the info needed to get the right image from the CI registry is in the IMAGE_FORMAT env var) and then doing an `oc tag --source=docker <PR image ref> openshift/jenkins:2` to replace the use of the payload image in the jenkins imagestream in the openshift namespace with the PR's image
2) insert 1) in https://github.com/openshift/release/blob/master/ci-operator/step-registry/jenkins/sync-plugin/e2e/jenkins-sync-plugin-e2e-commands.sh and https://github.com/openshift/release/blob/master/ci-operator/step-registry/jenkins/client-plugin/tests/jenkins-client-plugin-tests-commands.sh where you test for IMAGE_FORMAT being set
3) or instead of 2) you update the Makefiles for the plugins to call a script that does the same sort of thing, see what is in IMAGE_FORMAT, and if it has something, do the `oc tag`
https://github.com/openshift/release/pull/26979 is a prototype of how to stick the image built from a PR and conceivably the periodics to get the image built from it and tag it into the jenkins imagestream in the openshift namespace in the test cluster
After installing or upgrading to the latest OCP version, the existing OpenShift route to the prometheus-k8s service is updated to be a path-based route to '/api/v1'.
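A sketch of what the updated route could look like (TLS and port details are assumptions):
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: prometheus-k8s
  namespace: openshift-monitoring
spec:
  path: /api/v1  # path-based: only the query API is exposed
  to:
    kind: Service
    name: prometheus-k8s
  port:
    targetPort: web
  tls:
    termination: reencrypt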
DoD:
Following up on https://issues.redhat.com/browse/MON-1320, we added three new CLI flags to Prometheus to apply different limits on the samples' labels. These new flags are available starting from Prometheus v2.27.0, which will most likely be shipped in OpenShift 4.9.
The limits that we want to look into for OCP are the following ones:
# Per-scrape limit on number of labels that will be accepted for a sample. If
# more than this number of labels are present post metric-relabeling, the
# entire scrape will be treated as failed. 0 means no limit.
[ label_limit: <int> | default = 0 ]

# Per-scrape limit on length of labels name that will be accepted for a sample.
# If a label name is longer than this number post metric-relabeling, the entire
# scrape will be treated as failed. 0 means no limit.
[ label_name_length_limit: <int> | default = 0 ]

# Per-scrape limit on length of labels value that will be accepted for a sample.
# If a label value is longer than this number post metric-relabeling, the
# entire scrape will be treated as failed. 0 means no limit.
[ label_value_length_limit: <int> | default = 0 ]
We could benefit from them by setting relatively high values that only unbounded cardinality would breach, thus rejecting such targets completely if they happened to violate our constraints.
DoD:
When users configure CMO to interact with systems outside of an OpenShift cluster, we want to provide an easy way to add the cluster ID to the data sent.
Technically this can be achieved today, by adding an identifying label to the remote_write configuration for a given cluster. The operator adding the remote_write integration needs to take care that the label is unique over the managed fleet of clusters. This however adds management complexity. Any given cluster already has a pseudo-unique datum, that can be used for this purpose.
Expose a flag in the CMO configuration that is false by default (keeps backward compatibility) and when set to true will add the _id label to a remote_write configuration. More specifically, it will be added to the top of a remote_write relabel_config list via the replace action. This will add the label as expected, but additionally a user could alter this label in a later relabel config to suit any specific requirements (say, rename the label or add additional information to the value).
The location of this flag is the remote_write Spec, so this can be set for individual remote_write configurations.
We currently use a sample app to e2e test remote write in CMO.
In order to test the addition of the cluster_id relabel config, we need to confirm that the metrics sent actually have the expected label.
For this test we should use Prometheus as the remote_write target. This allows us to query the metrics sent via remote write and confirm they have the expected label.
Add an optional boolean flag to CMO's definition of RemoteWriteSpec that, if true, adds an entry in the spec's WriteRelabelConfigs list.
I went with adding the relabel config to all user-supplied remote_write configurations. This path has no risk for backwards compatibility (unless users already use the tmp_openshift_cluster_id label, which seems unlikely) and reduces overall complexity, as well as documentation complexity.
The entry should look like what is already added to the telemetry remote write config and it should be added as the first entry in the list, before any user supplied relabel configs.
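Assuming it mirrors the telemetry entry, the generated relabel config could look like this (the temporary source label name is an assumption based on the label mentioned above):
writeRelabelConfigs:
- sourceLabels: ["__tmp_openshift_cluster_id__"]  # temporary label carrying the cluster ID (assumption)
  targetLabel: _id
  action: replace
# user-supplied relabel configs follow and may rename or extend _id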
The potential target ServiceMonitors are:
As a user, I want the topology view to be less cluttered as I zoom out, showing only information that I can discern and still be able to get a feel for the status of my project.
As a user, I want to understand which service bindings connected a service to a component successfully or not. Currently it's really difficult to understand and requires inspecting each ServiceBinding resource (YAML).
See also https://docs.google.com/document/d/1OzE74z2RGO5LPjtDoJeUgYBQXBSVmD5tCC7xfJotE00/edit
This epic is mainly focused on the 4.10 Release QE activities
1. Identify the scenarios for automation
2. Segregate the test scenarios into smoke, regression and other user stories
a. Update the https://docs.jboss.org/display/ODC/Automation+Status+Report
3. Align with layered operator teams for updating scripts
4. Work closely with dev team for epic automation
5. Create the automation scripts using Cypress
6. Implement CI for nightly builds
7. Execute scripts on sprint basis
To track the QE progress in one place in the 4.10 Release Confluence page
Acceptance criteria:
This epic covers a number of customer requests(RFEs) as well as increases usability.
Customer satisfaction as well as improved usability.
None
As a user, I should be able to switch between the form and yaml editor while creating the ProjectHelmChartRepository CR.
Form component https://github.com/openshift/console/pull/11227
As a user, I want to use a form to create Deployments
Edit deployment form ODC-5007
Currently we are only able to get limited telemetry from the Dev Sandbox, but not from any of our managed clusters or on prem clusters.
In order to properly analyze usage and the user experience, we need to be able to gather as much data as possible.
// JS type
telemetry?: Record<string, string>
./bin/bridge --telemetry SEGMENT_API_KEY=a-key-123-xzy
./bin/bridge --telemetry CONSOLE_LOG=debug
Goal:
Enhance oc adm release new (and related verbs info, extract, mirror) with heterogeneous architecture support
tl;dr
oc adm release new (and related verbs info, extract, mirror) would be enhanced to optionally allow the creation of manifest list release payloads. The manifest list flow would be triggered whenever the CVO image in an imagestream was a manifest list. If the CVO image is a standard manifest, the generated release payload will also be a manifest. If the CVO image is a manifest list, the generated release payload would be a manifest list (containing a manifest for each arch possessed by the CVO manifest list).
In either case, oc adm release new would permit non-CVO component images to be manifest or manifest lists and pass them through directly to the resultant release manifest(s).
If a manifest list release payload is generated, each architecture specific release payload manifest will reference the same pullspecs provided in the input imagestream.
More details in Option 1 of https://docs.google.com/document/d/1BOlPrmPhuGboZbLZWApXszxuJ1eish92NlOeb03XEdE/edit#heading=h.eldc1ppinjjh
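A minimal Go sketch of the decision flow described above, using simplified stand-in types rather than oc's actual implementation:

package main

import "fmt"

// Image is a simplified stand-in: a non-empty Manifests list marks a
// manifest list image; an empty list marks a standard manifest.
type Image struct {
	Name      string
	Manifests []string // per-architecture manifests
}

// payloadArches mirrors the described flow: a manifest-list CVO image yields
// one release payload manifest per architecture; a standard CVO image yields
// a single standard payload manifest.
func payloadArches(cvo Image) []string {
	if len(cvo.Manifests) > 0 {
		return cvo.Manifests
	}
	return []string{"single-manifest"}
}

func main() {
	fmt.Println(payloadArches(Image{Name: "cvo", Manifests: []string{"amd64", "arm64"}}))
	fmt.Println(payloadArches(Image{Name: "cvo"}))
}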
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
I asked Zvonko Kaiser and he seemed open to it. I need to confirm with Shiva Merla
Rename Provider to Infrastructure Provider
Add GPU Provider
https://miro.com/app/board/uXjVOeUB2B4=/?moveToWidget=3458764514332229879&cot=14
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
Description of problem:
Users on a fully-disconnected cluster could not see Devfiles in the developer catalog or import a Devfile. That's expected.
But the API calls /api/devfile/samples/ and /api/devfile/ take 30 seconds until they fail with a 504 Gateway Timeout error.
If possible they should fail immediately.
Version-Release number of selected component (if applicable):
This might have been happening since 4.8.
So far tested only on 4.12.0-0.nightly-2022-09-07-112008.
How reproducible:
Always
Steps to Reproduce:
Actual results:
Expected results:
Additional info:
The console Pod log contains this error:
E0909 10:28:18.448680 1 devfile-handler.go:74] Failed to parse devfile: failed to populateAndParseDevfile: Get "https://registry.devfile.io/devfiles/go": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
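To illustrate the requested fail-fast behavior, here is a minimal, self-contained Go sketch that bounds the outbound registry call with a short timeout; the 5-second value and the function shape are assumptions for illustration, not the console's actual handler code (the URL is taken from the error above):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// fetchDevfile bounds the outbound registry call so it fails fast in a
// disconnected cluster instead of hanging until the gateway timeout.
func fetchDevfile(url string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err // surfaces quickly instead of after 30 seconds
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	fmt.Println(fetchDevfile("https://registry.devfile.io/devfiles/go", 5*time.Second))
}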
This fix contains the following changes coming from an updated version of Kubernetes, up to v1.24.11:
Changelog:
v1.24.11: https://github.com/kubernetes/kubernetes/blob/release-1.24/CHANGELOG/CHANGELOG-1.24.md#changelog-since-v12410
v1.24.10: https://github.com/kubernetes/kubernetes/blob/release-1.24/CHANGELOG/CHANGELOG-1.24.md#changelog-since-v1249
v1.24.9: https://github.com/kubernetes/kubernetes/blob/release-1.24/CHANGELOG/CHANGELOG-1.24.md#changelog-since-v1248
v1.24.8: https://github.com/kubernetes/kubernetes/blob/release-1.24/CHANGELOG/CHANGELOG-1.24.md#changelog-since-v1247
v1.24.7: https://github.com/kubernetes/kubernetes/blob/release-1.24/CHANGELOG/CHANGELOG-1.24.md#changelog-since-v1246
Description of problem:
Manual backport of:
* https://github.com/openshift/cluster-dns-operator/pull/336
* https://github.com/openshift/cluster-dns-operator/pull/339
Version-Release number of selected component (if applicable):
4.11
Description of problem:
Disconnected IPI OCP 4.10.22 cluster install on baremetal fails when hostname of master nodes does not include "master"
Version-Release number of selected component (if applicable): 4.10.22
How reproducible: Perform disconnected IPI install of OCP 4.10.22 on bare metal with master nodes that do not contain the text "master"
Steps to Reproduce:
Perform disconnected IPI install of OCP 4.10.22 on bare metal with master nodes that do not contain the text "master"
Actual results: master nodes do not come up.
Expected results: master nodes should come up despite that the text "master" is not in their hostname.
Additional info:
The code for the cluster-baremetal-operator at the following link:
The following condition is concerning:
if strings.Contains(bmh.Name, "master") && len(bmh.Spec.BootMACAddress) > 0
The packages reveal that bmh.Name references the name inside the metadata of the BMH object.
Should a customer have masters with names that do not include the text "master", the above condition can never become true, and so the following append is never executed:
macs = append(macs, bmh.Spec.BootMACAddress)
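A minimal, self-contained Go sketch reproducing the concerning flow with simplified stand-in types; it shows how a master named without the substring "master" is silently skipped even though its BootMACAddress is set:

package main

import (
	"fmt"
	"strings"
)

// Simplified stand-in types for the BMH objects referenced above.
type BMHSpec struct{ BootMACAddress string }
type BMH struct {
	Name string
	Spec BMHSpec
}

// collectMACs reproduces the quoted condition: BootMACAddresses are only
// collected for hosts whose metadata name happens to contain "master".
func collectMACs(hosts []BMH) []string {
	var macs []string
	for _, bmh := range hosts {
		if strings.Contains(bmh.Name, "master") && len(bmh.Spec.BootMACAddress) > 0 {
			macs = append(macs, bmh.Spec.BootMACAddress)
		}
	}
	return macs
}

func main() {
	hosts := []BMH{{Name: "ctrl-0", Spec: BMHSpec{BootMACAddress: "aa:bb:cc:dd:ee:ff"}}}
	fmt.Println(collectMACs(hosts)) // prints []: the MAC is skipped despite being set
}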
This is a clone of issue OCPBUGS-7960. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-7780. The following is the description of the original issue:
—
Description of problem:
4.9 and 4.10 oc calls to oc adm upgrade channel ... against 4.11+ clusters would clear spec.capabilities. Not all that many clusters try to restrict capabilities, but folks will need to bump their channel at least every other minor (if they're using EUS channels), and while we recommend folks use an oc from the 4.y they're heading towards, we don't have anything in place to enforce that.
Version-Release number of selected component (if applicable):
4.9 and 4.10 oc are exposed vs. the new-in-4.11 spec.capabilities. Newer oc could theoretically be exposed vs. any new ClusterVersion spec capabilities.
How reproducible:
100%
Steps to Reproduce:
1. Install a 4.11+ cluster with None capabilities.
2. Set the channel with a 4.10.51 oc, like oc adm upgrade channel fast-4.11.
3. Check the capabilities with oc get -o json clusterversion version | jq -c .spec.capabilities.
Actual results:
null
Expected results:
{"baselineCapabilitySet":"None"}
This is a clone of issue OCPBUGS-10943. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10661. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10591. The following is the description of the original issue:
—
Description of problem:
Starting with 4.12.0-0.nightly-2023-03-13-172313, the machine API operator began receiving an invalid version tag, either due to a missing or invalid VERSION_OVERRIDE (https://github.com/openshift/machine-api-operator/blob/release-4.12/hack/go-build.sh#L17-L20) value being passed to the build. This is resulting in all jobs invoked by the 4.12 nightlies failing to install.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2023-03-13-172313 and later
How reproducible:
Consistently, in 4.12 nightlies only (CI builds do not seem to be impacted).
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Example of failure https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-csi/1635331349046890496/artifacts/e2e-aws-csi/gather-extra/artifacts/pods/openshift-machine-api_machine-api-operator-866d7647bd-6lhl4_machine-api-operator.log
Tracker issue for bootimage bump in 4.11. This issue should block issues which need a bootimage bump to fix.
The previous bump was OCPBUGS-3362.
This is a clone of issue OCPBUGS-8044. The following is the description of the original issue:
—
rhbz#2089199, backported to 4.11.5, shifted the etcd Grafana dashboard from the monitoring operator to the etcd operator. During the shift, the ConfigMap was renamed from grafana-dashboard-etcd to etcd-dashboard. However, we did not include logic for garbage-collecting the obsolete dashboard, so clusters that update from 4.11.1 and similar into 4.11.>=5 or 4.12+ currently end up with both the obsolete and new ConfigMaps. We should grow code to remove the obsolete ConfigMap.
4.11.>=5 and 4.12+ are currently exposed.
100%
1. Install 4.11.1.
2. Update to a release that defines the etcd-dashboard ConfigMap.
3. Check for etcd dashboards with oc -n openshift-config-managed get configmaps | grep etcd.
Both etcd-dashboard and grafana-dashboard-etcd exist:
$ oc -n openshift-config-managed get configmaps | grep etcd
etcd-dashboard           1     196d
grafana-dashboard-etcd   1     2y282d
Another example is 4.11.1 to 4.11.5 CI:
$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/1570415394001260544/artifacts/e2e-aws-upgrade/configmaps.json | jq -r '.items[].metadata | select(.namespace == "openshift-config-managed" and (.name | contains("etcd"))) | .name'
etcd-dashboard
grafana-dashboard-etcd
Only etcd-dashboard still exists.
A new manifest for the outgoing ConfigMap that sets the release.openshift.io/delete: "true" annotation would ask the cluster-version operator to reap the obsolete ConfigMap.
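A minimal Go sketch of that reaping behavior, with simplified stand-in types rather than the CVO's actual code; the annotation key and ConfigMap name come from this report:

package main

import "fmt"

// Manifest is a simplified stand-in for a release manifest object.
type Manifest struct {
	Name        string
	Annotations map[string]string
}

// shouldReap mirrors the described behavior: a manifest carrying the
// release.openshift.io/delete annotation asks the CVO to delete the object
// instead of applying it.
func shouldReap(m Manifest) bool {
	return m.Annotations["release.openshift.io/delete"] == "true"
}

func main() {
	obsolete := Manifest{
		Name:        "grafana-dashboard-etcd",
		Annotations: map[string]string{"release.openshift.io/delete": "true"},
	}
	fmt.Println(shouldReap(obsolete)) // true: the obsolete ConfigMap gets reaped
}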
Description of problem:
IPI installation failed against 4.11 nightly build on Azure Stack Hub WWT. Worker machines failed to be provisioned as they use the wrong image, which does not exist.
$ oc get machine -n openshift-machine-api
NAME                                  PHASE     TYPE              REGION   ZONE   AGE
jima411az-xvwvg-master-0              Running   Standard_DS4_v2   mtcazs          3h10m
jima411az-xvwvg-master-1              Running   Standard_DS4_v2   mtcazs          3h10m
jima411az-xvwvg-master-2              Running   Standard_DS4_v2   mtcazs          3h10m
jima411az-xvwvg-worker-mtcazs-4k6hs   Failed                                      3h3m
jima411az-xvwvg-worker-mtcazs-fc6s5   Failed                                      3h3m
jima411az-xvwvg-worker-mtcazs-sclgf   Failed                                      3h3m
$ oc describe machine jima411az-xvwvg-worker-mtcazs-4k6hs -n openshift-machine-api
...
Events:
  Type     Reason        Age   From              Message
  ----     ------        ----  ----              -------
  Warning  FailedCreate  3h4m  azure-controller  InvalidConfiguration: failed to reconcile machine "jima411az-xvwvg-worker-mtcazs-4k6hs": failed to create vm jima411az-xvwvg-worker-mtcazs-4k6hs: failure sending request for machine jima411az-xvwvg-worker-mtcazs-4k6hs: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=404 -- Original Error: Code="NotFound" Message="The Image '/subscriptions/de7e09c3-b59a-4c7d-9c77-439c11b92879/resourceGroups/jima411az-xvwvg-rg/providers/Microsoft.Compute/images/jima411az-xvwvg-gen2' cannot be found in 'mtcazs' region."
The image created on ASH is `/subscriptions/de7e09c3-b59a-4c7d-9c77-439c11b92879/resourceGroups/jima411az-xvwvg-rg/providers/Microsoft.Compute/images/jima411az-xvwvg`
# az resource list -g jima411az-xvwvg-rg --query "[?type=='Microsoft.Compute/images']"
[
  {
    "changedTime": "2023-08-03T01:08:03.369233+00:00",
    "createdTime": "2023-08-03T00:57:34.228352+00:00",
    "id": "/subscriptions/de7e09c3-b59a-4c7d-9c77-439c11b92879/resourceGroups/jima411az-xvwvg-rg/providers/Microsoft.Compute/images/jima411az-xvwvg",
    "identity": null,
    "kind": null,
    "location": "mtcazs",
    "managedBy": null,
    "name": "jima411az-xvwvg",
    "plan": null,
    "properties": null,
    "provisioningState": "Succeeded",
    "resourceGroup": "jima411az-xvwvg-rg",
    "sku": null,
    "tags": {},
    "type": "Microsoft.Compute/images"
  }
]
Version-Release number of selected component (if applicable):
4.11.0-0.nightly-2023-07-29-013834
How reproducible:
Always on 4.11
Steps to Reproduce:
1. Deploy IPI with 4.11 on ASH WWT
Actual results:
worker nodes failed to be provisioned
Expected results:
Installation is successful.
Additional info:
Issue detected on 4.11; IPI installation works well on 4.12+.
This is a clone of issue OCPBUGS-1417. The following is the description of the original issue:
—
Description of problem:
Egress IP is not being assigned to the primary interface of the node as per the hostsubnet definition. The issue is being observed on an OpenShift cluster hosted in a disconnected AWS environment. The following steps were performed at the AWS end:
- A disconnected VPC was created and OpenShift was installed as per documentation.
- An Elastic IP could not be used as it is a disconnected environment. The customer identified a free IP from the same subnet as the node and modified the node's interface to add it as a secondary IP.
It seems the cloud.network.openshift.io/egress-ipconfig annotation is needed on the node to attach the IP to the primary interface, but it is missing. From the SDN pod log on the same node I could see it complaining about 'an incomplete annotation "cloud.network.openshift.io/egress-ipconfig"'. Will share more details over comments.
Version-Release number of selected component (if applicable):
Openshift 4.10.28
How reproducible:
Always
Steps to Reproduce:
1. Create a disconnected environment on AWS
2. Find a free IP from the subnet where a worker node is hosted and add it as a secondary IP to the NIC of that node.
3. Configure hostsubnet and netnamespace on the OpenShift cluster
Actual results:
- Egress IP is not being attached to the primary interface of the node for which the hostsubnet has been configured
Expected results:
- Egress IP should get configured without any issue.
Additional info:
This is a clone of issue OCPBUGS-6766. The following is the description of the original issue:
—
This is a clone of https://bugzilla.redhat.com/show_bug.cgi?id=2083087 (OCPBUGSM-44070) to backport this issue.
Description of problem:
"Delete dependent objects of this resource" is a bit of confusing for some users because when creating the Application in Dev console not only the deployment but also IS, route, svc, secret objects will be created as well. When deleting the Application (in fact it is deployment), there is an option called "Delete dependent objects of this resource" and some users might think this means the IS, route, svc and any other objects which are created alongside with the deployment will be deleted as well
Version-Release number of selected component (if applicable):
4.8
How reproducible:
Always
Steps to Reproduce:
1. Create Application in Dev console
2. Delete the deployment
3. Check "Delete dependent objects of this resource"
Actual results:
Only the Deployment is deleted; the IS, Service, and Route are not.
Expected results:
We should either change the description of this option, or really delete the IS, Service, Route, and any other objects created under this Application.
Additional info:
This is a clone of issue OCPBUGS-3235. The following is the description of the original issue:
—
Frequently we see the loading state of the topology view, even when there aren't many resources in the project.
Including an example
topology will sometimes hang with the loading indicator showing indefinitely
topology should load consistently without fail
intermittent
4.9
This bug is a backport clone of [Bugzilla Bug 2094174](https://bugzilla.redhat.com/show_bug.cgi?id=2094174). The following is the description of the original bug:
—
Created attachment 1887340
CVO log file
Description of problem:
After clearing an upgrade whose signature verification failed, ReleaseAccepted=False keeps complaining that the update cannot be verified. Excerpt from the ClusterVersion conditions (earlier entries elided):
{ "lastTransitionTime": "2022-06-07T01:56:17Z", "message": "Cluster version is 4.11.0-0.nightly-2022-06-06-025509", "status": "False", "type": "Progressing" }]
Version-Release number of the following components:
4.11.0-0.nightly-2022-06-06-025509
How reproducible:
1/1
Steps to Reproduce:
1. Upgrade to a fake release
2. Check ReleaseAccepted=False due to target image signature verification failure
ReleaseAccepted=False
Reason: RetrievePayload
Message: Retrieving payload failed version="" image="registry.ci.openshift.org/ocp/release@sha256:5967359c2bfee0512030418af0f69faa3fa74a81a89ad64a734420e020e7f100" failure=The update cannot be verified: unable to verify sha256:5967359c2bfee0512030418af0f69faa3fa74a81a89ad64a734420e020e7f100 against keyrings: verifier-public-key-redhat
Upstream is unset, so the cluster will use an appropriate default.
Channel: stable-4.11
warning: Cannot display available updates:
Reason: VersionNotFound
Message: Unable to retrieve available updates: currently reconciling cluster version 4.11.0-0.nightly-2022-06-04-014713 not found in the "stable-4.11" channel
3. Clear the upgrade
4. Check oc adm upgrade info
ReleaseAccepted=False
Reason: RetrievePayload
Message: Retrieving payload failed version="" image="registry.ci.openshift.org/ocp/release@sha256:5967359c2bfee0512030418af0f69faa3fa74a81a89ad64a734420e020e7f100" failure=The update cannot be verified: unable to verify sha256:5967359c2bfee0512030418af0f69faa3fa74a81a89ad64a734420e020e7f100 against keyrings: verifier-public-key-redhat
Upstream is unset, so the cluster will use an appropriate default.
Channel: stable-4.11
warning: Cannot display available updates:
Reason: VersionNotFound
Message: Unable to retrieve available updates: currently reconciling cluster version 4.11.0-0.nightly-2022-06-04-014713 not found in the "stable-4.11" channel
Actual results:
After upgrade is cleared, cv condition ReleaseAccepted keeps to false with message The update cannot be verified
Expected results:
After upgrade is cleared, cv condition ReleaseAccepted should stop complaining about the target image
Additional info:
Please attach logs from ansible-playbook with the -vvv flag
This is a clone of issue OCPBUGS-4250. The following is the description of the original issue:
—
Description of problem:
Added a script to collect PodNetworkConnectivityChecks, to be able to view the overall status of the pod network connectivity. The current must-gather collects the contents of `openshift-network-diagnostics` but does not collect the PodNetworkConnectivityCheck objects.
Version-Release number of selected component (if applicable):
4.12, 4.11, 4.10
Description of problem:
Upgrade to 4.10 is stuck looping in syncEgressFirewall. We see transacting operations failing with context deadline exceeded. It looks to be trying to process 2.8 million records in one go.
2023-02-21T19:55:06.514097513Z I0221 19:55:06.435220 1 client.go:781] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:delete Value:{GoSet:[{GoUUID:6a3ad543-a77d-4700-83b8-5ccae6b2d067} ID:1c5297ff-8588-467a-93f4-22f22d609563} {GoUUID:f6288ed3-3928-45a8-ae57-40ed94cfa249} {GoUUID:04bf90c2-fde1-4a10-baaa-6a3f1d8e2931} {GoUUID:c6609536-857c-48ae-9125-9505753180a8} {GoUUID:c79b4398-d7cc-4dcf-8c1d-11484f318324} {GoUUID:4323ac2c-033e-43c3-885b-e951cd7a4159} {GoUUID:7b316a80-076f-4266-b7d2-bd69b1d4b874} {GoUUID:57dfecb2-2f94-4cd8-a277-8b28205e1048} {GoUUID:2c039f15-ff11-4ceb-aa82-bcbe82fc86d1} {GoUUID:063c4121-73c3-4d53-a89d-1063e775146b} {GoUUID:25c788e3-6146-4571-98bf-61010100a22a} {GoUUID:3d3c150f-1296-4d91-b334-506f28bff4bd}]}}] Timeout:<nil> Where:[where column _uuid == {ba9652de-5aae-4a74-a512-29f775e38c19}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]: context deadline exceeded
2023-02-21T19:55:18.739739417Z E0221 19:55:18.643127 1 master.go:1369] Failed (will retry) in syncing syncEgressFirewall: failed to remove reject acl from node logical switches: error while removing ACLS: [6a3ad543-a77d-4700-83b8-5ccae6b2d067 8e004991-0382-455f-9901-33ef724acbc2
Everything is built into one operation via: https://github.com/openshift/ovn-kubernetes/blob/release-4.10/go-controller/pkg/libovsdbops/switch.go#L243
TransactAndCheck is being called with a 10s timeout and this operation never completes.
Version-Release number of selected component (if applicable):
4.10.50
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Upgrade completes
Additional info:
Description of problem:
The ovn-kubernetes ovnkube-master containers are continuously crashlooping since we updated to 4.11.0-0.okd-2022-10-15-073651.
Log Excerpt:
] [] [] [{kubectl-client-side-apply Update networking.k8s.io/v1 2022-09-12 12:25:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:ingress":{},"f:policyTypes":{}}} }]},Spec:NetworkPolicySpec{PodSelector:{map[] []},Ingress:[]NetworkPolicyIngressRule{NetworkPolicyIngressRule{Ports:[]NetworkPolicyPort{},From:[]NetworkPolicyPeer{NetworkPolicyPeer{PodSelector:&v1.LabelSelector{MatchLabels:map[string]string{access: true,},MatchExpressions:[]LabelSelectorRequirement{},},NamespaceSelector:nil,IPBlock:nil,},},},},Egress:[]NetworkPolicyEgressRule{},PolicyTypes:[Ingress],},} &NetworkPolicy{ObjectMeta:{allow-from-openshift-ingress compsci-gradcentral a405f843-c250-40d7-8dd4-a759f764f091 217304038 1 2022-09-22 14:36:38 +0000 UTC <nil> <nil> map[] map[] [] [] [{openshift-apiserver Update networking.k8s.io/v1 2022-09-22 14:36:38 +0000 UTC FieldsV1 {"f:spec":{"f:ingress":{},"f:policyTypes":{}}} }]},Spec:NetworkPolicySpec{PodSelector:{map[] []},Ingress:[]NetworkPolicyIngressRule{NetworkPolicyIngressRule{Ports:[]NetworkPolicyPort{},From:[]NetworkPolicyPeer{NetworkPolicyPeer{PodSelector:nil,NamespaceSelector:&v1.LabelSelector{MatchLabels:map[string]string{policy-group.network.openshift.io/ingress: ,},MatchExpressions:[]LabelSelectorRequirement{},},IPBlock:nil,},},},},Egress:[]NetworkPolicyEgressRule{},PolicyTypes:[Ingress],},}]: cannot clean up egress default deny ACL name: error in transact with ops [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:delete Value:{GoSet:[{GoUUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6}]}}] Timeout:<nil> Where:[where column _uuid == {ccdd01bf-3009-42fb-9672-e1df38190cd7}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:delete Value:{GoSet:[{GoUUID:60cb946a-46e9-4623-9ba4-3cb35f018ed6}]}}] Timeout:<nil> Where:[where column _uuid == {10bbf229-8c1b-4c62-b36e-4ba0097722db}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:delete Table:ACL Row:map[] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {7b55ba0c-150f-4a63-9601-cfde25f29408}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:delete Table:ACL Row:map[] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {60cb946a-46e9-4623-9ba4-3cb35f018ed6}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}] results [{Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:0 Error:referential integrity violation Details:cannot delete ACL row 7b55ba0c-150f-4a63-9601-cfde25f29408 because of 1 remaining reference(s) UUID:{GoUUID:} Rows:[]}] and errors []: referential integrity violation: cannot delete ACL row 7b55ba0c-150f-4a63-9601-cfde25f29408 because of 1 remaining reference(s)
Additional info:
https://github.com/okd-project/okd/issues/1372
Issue persisted through update to 4.11.0-0.okd-2022-10-28-153352
must-gather: https://nbc9-snips.cloud.duke.edu/snips/must-gather.local.2859117512952590880.zip
Description of problem:
After setting the appsDomain flag in the OpenShift Ingress configuration, all Routes that are generated are based off this domain. This includes OpenShift-core Routes, which can result in failing health checks, as seen with the OpenShift Web Console. According to the documentation[0], the appsDomain value should only be used by user-created Routes and is not intended to be used when OpenShift components are reconciled.
Version-Release number of selected component (if applicable):
OpenShift 4.12
How reproducible:
Every time
Steps to Reproduce:
1. Provision a new cluster
2. Configure the appsDomain value
3. Delete the OpenShift Console Route: oc delete routes -n openshift-console console
4. The Route is automatically reconciled
Actual results:
The re-created Route uses the appsDomain value, resulting in failed health-checks throughout the cluster.
Expected results:
The Console and all other OpenShift-core components should use the default domain.
Additional info:
Reviewing the appsDomain code: if the appsDomain value is configured, it takes precedence for all Routes. [3][1][2]
Resources:
[0] https://docs.openshift.com/container-platform/4.9/networking/ingress-operator.html#nw-ingress-configuring-application-domain_configuring-ingress
[1] Sets the Route subdomain to appsDomain if it's present, otherwise uses the default domain: https://github.com/openshift/cluster-openshift-apiserver-operator/blob/d2182396d647839f61b027602578ac31c9437e7d/pkg/operator/configobservation/ingresses/observe_ingresses.go#L16-L60
[2] Sets up the Route generator: https://github.com/openshift/openshift-apiserver/blob/9c381fc873b35c074f7c0c485893e038828c1405/pkg/cmd/openshift-apiserver/openshiftapiserver/config.go#L217-L218
[3] Where Route.spec.host is generated: https://github.com/openshift/openshift-apiserver/blob/9c381fc873b35c074f7c0c485893e038828c1405/vendor/github.com/openshift/library-go/pkg/route/hostassignment/plugin.go#L22-L47
Description of problem:
Cluster running 4.10.52 had three aws-ebs-csi-driver-node pods begin to consume multiple GB of memory, causing heavy node memory pressure as the pods have no memory limit. All other aws-ebs-csi-driver-node pods were still in the 50-70MB range:
NAME                                            CPU(cores)   MEMORY(bytes)
aws-ebs-csi-driver-controller-59867579b-d6s2q   0m           397Mi
aws-ebs-csi-driver-controller-59867579b-t4wgq   0m           276Mi
aws-ebs-csi-driver-node-4rmvk                   0m           53Mi
aws-ebs-csi-driver-node-5799f                   0m           50Mi
aws-ebs-csi-driver-node-6dpvg                   0m           59Mi
aws-ebs-csi-driver-node-6ldzk                   0m           65Mi
aws-ebs-csi-driver-node-6mbk5                   0m           54Mi
aws-ebs-csi-driver-node-bkvsr                   0m           50Mi
aws-ebs-csi-driver-node-c2fb2                   0m           62Mi
aws-ebs-csi-driver-node-f422m                   0m           61Mi
aws-ebs-csi-driver-node-lwzbb                   6m           1940Mi
aws-ebs-csi-driver-node-mjznt                   0m           53Mi
aws-ebs-csi-driver-node-pczsj                   0m           62Mi
aws-ebs-csi-driver-node-pmskn                   0m           3493Mi
aws-ebs-csi-driver-node-qft8w                   0m           68Mi
aws-ebs-csi-driver-node-v5bpx                   11m          2076Mi
aws-ebs-csi-driver-node-vn8km                   0m           84Mi
aws-ebs-csi-driver-node-ws6hx                   0m           73Mi
aws-ebs-csi-driver-node-xsk7k                   0m           59Mi
aws-ebs-csi-driver-node-xzwlh                   0m           55Mi
aws-ebs-csi-driver-operator-8c5ffb6d4-fk6zk     5m           88Mi
Deleting the pods caused them to recreate, with normal memory consumption levels.
Version-Release number of selected component (if applicable):
4.10.52
How reproducible:
Unknown
Description of problem:
[OVN][OSP] After rebooting the egress node, the egress IP cannot be applied anymore.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-11-07-181244
How reproducible:
Happens frequently in automation, but could not be reproduced manually.
Steps to Reproduce:
1. Label one node as an egress node
2. Configure one egressIP object
STEP: Check one EgressIP assigned in the object.
Nov 8 15:28:23.591: INFO: egressIPStatus: [{"egressIP":"192.168.54.72","node":"huirwang-1108c-pg2mt-worker-0-2fn6q"}]
3. Reboot the node and wait for the node to become ready.
Actual results:
EgressIP cannot be applied anymore. Waited more than 1 hour.
$ oc get egressip
NAME             EGRESSIPS       ASSIGNED NODE   ASSIGNED EGRESSIPS
egressip-47031   192.168.54.72
Expected results:
The egressIP should be applied correctly.
Additional info:
Some logs:
E1108 07:29:41.849149 1 egressip.go:1635] No assignable nodes found for EgressIP: egressip-47031 and requested IPs: [192.168.54.72]
I1108 07:29:41.849288 1 event.go:285] Event(v1.ObjectReference{Kind:"EgressIP", Namespace:"", Name:"egressip-47031", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'NoMatchingNodeFound' no assignable nodes for EgressIP: egressip-47031, please tag at least one node with label: k8s.ovn.org/egress-assignable
W1108 07:33:37.401149 1 egressip_healthcheck.go:162] Could not connect to huirwang-1108c-pg2mt-worker-0-2fn6q (10.131.0.2:9107): context deadline exceeded
I1108 07:33:37.401348 1 master.go:1364] Adding or Updating Node "huirwang-1108c-pg2mt-worker-0-2fn6q"
I1108 07:33:37.437465 1 egressip_healthcheck.go:168] Connected to huirwang-1108c-pg2mt-worker-0-2fn6q (10.131.0.2:9107)
After this log, seems like no logs related to "192.168.54.72" happened.
Description of problem:
Looks like a regression was introduced, and the Windows networking tests started to fail across all 4.11 CI jobs for both Linux-to-Windows and Windows-to-Windows pod communication. https://github.com/openshift/ovn-kubernetes/pull/1415 merged without passing Windows networking tests. https://github.com/openshift/windows-machine-config-operator/pull/1359 shows networking tests failing across all CI jobs.
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Always
Steps to Reproduce:
1. Trigger a job retest in any of the above-mentioned PRs
Actual results:
Windows networking tests fail
Expected results:
Windows networking tests pass
Additional info:
Direct link to a release-4.11 aws-e2e-operator CI job showing the failed networking test.
This is a clone of issue OCPBUGS-5016. The following is the description of the original issue:
—
Description of problem:
When editing any pipeline in the OpenShift console, the correct content cannot be obtained (the content shown is the initial content, not the current one).
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
Developer -> Pipeline -> select pipeline -> Details -> Actions -> Edit Pipeline -> YAML view -> Cancel -> Actions -> Edit Pipeline -> YAML view
Actual results:
The displayed content is incorrect.
Expected results:
The content of the current pipeline should be shown, not the "create pipeline" content.
Additional info:
If you cancel or save in the "Pipeline Builder" interface after "Edit Pipeline", you get the expected content:
Developer -> Pipeline -> select pipeline -> Details -> Actions -> Edit Pipeline -> Pipeline builder -> Cancel -> Actions -> Edit Pipeline -> YAML view: displays the resource content normally
This is a clone of issue OCPBUGS-884. The following is the description of the original issue:
—
Description of problem:
Since the decommissioning of the psi cluster, and the subsequent move of the rhcos release browser, product machine-os-images builds have been failing. See e.g. https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=47565717
Version-Release number of selected component (if applicable):
4.12, 4.11, 4.10.
How reproducible:
Have ART build the image
Steps to Reproduce:
1. Have ART build the image
Actual results:
Build failure
Expected results:
Build successful
Additional info:
Description of problem:
Sometimes we see VMs fail to power on when they land on a host that does not have enough resources. The current power-on logic does not retry or leverage DRS to power on the VM on a suitable host.
https://github.com/vmware/govmomi/issues/1026
Our code is still making calls to PowerOnVM_Task which, according to the vSphere docs, is deprecated; we should use PowerOnMultiVM_Task instead.
Since PowerOnVM_Task does not return a DRS ClusterRecommendation, no vMotion nor host power operations will be done as part of a DRS-facilitated power on. To have DRS consider such operations, use PowerOnMultiVM_Task.
https://vdc-download.vmware.com/vmwb-repository/dcr-public/b50dcbbf-051d-4204-a3e7-e1b618c1e384/538cf2ec-b34f-4bae-a332-3820ef9e7773/vim.VirtualMachine.html#powerOn:
As of vSphere API 5.1, use of this method with vCenter Server is deprecated; use PowerOnMultiVM_Task instead.
Version-Release number of selected component (if applicable):
4.8.x
How reproducible:
Always
Steps to Reproduce:
1.
2.
3.
Actual results:
Sometimes power-on fails, requiring manual intervention.
Expected results:
PowerOn should use DRS to ensure it's always successful.
Additional info:
This is a clone of issue OCPBUGS-4805. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-4101. The following is the description of the original issue:
—
Description of problem:
We experienced two separate upgrade failures relating to the introduction of the SYSTEM_RESERVED_ES node sizing parameter, causing kubelet to stop running.
One cluster (clusterA) upgraded from 4.11.14 to 4.11.17. It experienced an issue whereby /etc/node-sizing.env on its master nodes contained an empty SYSTEM_RESERVED_ES value:
---
cat /etc/node-sizing.env
SYSTEM_RESERVED_MEMORY=5.36Gi
SYSTEM_RESERVED_CPU=0.11
SYSTEM_RESERVED_ES=
---
causing the kubelet to not start up. To restore service, this file was manually updated to set a value (1Gi), and kubelet was restarted. We are uncertain what conditions led to this occurring on the clusterA master nodes as part of the upgrade.
A second cluster (clusterB) upgraded from 4.11.16 to 4.11.17. It experienced an issue whereby worker nodes were impacted by a similar problem; however, this was because of a custom node-sizing-enabled.env MachineConfig which did not set SYSTEM_RESERVED_ES. This caused existing worker nodes to go into a NotReady state after the upgrade, and additionally new nodes did not join the cluster as their kubelet would become impacted.
For clusterB the conditions for the empty value are better understood. However, for both clusters, if SYSTEM_RESERVED_ES ends up empty on a node it can cause the kubelet to not start. We have some asks as a result:
- Can MCO be made to recover from this situation if it occurs, perhaps through application of a safe default if none exists, such that kubelet would start correctly?
- Can there be alerting that could indicate and draw attention to the misconfiguration?
Version-Release number of selected component (if applicable):
4.11.17
How reproducible:
Have not been able to reproduce it on a fresh cluster upgrading from 4.11.16 to 4.11.17
Expected results:
If SYSTEM_RESERVED_ES is empty in /etc/node-sizing*env then a default should be applied and/or kubelet able to continue running.
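A minimal Go sketch of the requested fallback; the "1Gi" default is taken from the manual workaround described above, and the function shape is an assumption for illustration, not MCO's actual code:

package main

import (
	"fmt"
	"strings"
)

// reservedES applies a safe default when the SYSTEM_RESERVED_ES value read
// from /etc/node-sizing.env is empty, so kubelet can keep running.
func reservedES(configured string) string {
	if strings.TrimSpace(configured) == "" {
		return "1Gi" // default taken from the manual workaround in this report
	}
	return configured
}

func main() {
	fmt.Println(reservedES(""))    // prints 1Gi
	fmt.Println(reservedES("2Gi")) // prints 2Gi
}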
Additional info:
This is a clone of issue OCPBUGS-13739. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-13692. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-13549. The following is the description of the original issue:
—
Description of problem:
Incorrect AWS ARN [1] is used for GovCloud and AWS China regions, which will cause the command `ccoctl aws create-all` to fail:
Failed to create Identity provider: failed to apply public access policy to the bucket ci-op-bb5dgq54-77753-oidc: MalformedPolicy: Policy has invalid resource status code: 400, request id: VNBZ3NYDH6YXWFZ3, host id: pHF8v7C3vr9YJdD9HWamFmRbMaOPRbHSNIDaXUuUyrgy0gKCO9DDFU/Xy8ZPmY2LCjfLQnUDmtQ=
Correct AWS ARN prefixes:
GovCloud (us-gov-east-1 and us-gov-west-1): arn:aws-us-gov
AWS China (cn-north-1 and cn-northwest-1): arn:aws-cn
[1] https://github.com/openshift/cloud-credential-operator/pull/526/files#diff-1909afc64595b92551779d9be99de733f8b694cfb6e599e49454b380afc58876R211
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2023-05-11-024616
How reproducible:
Always
Steps to Reproduce:
1. Run command: `ccoctl aws create-all --name="${infra_name}" --region="${REGION}" --credentials-requests-dir="/tmp/credrequests" --output-dir="/tmp"` on GovCloud regions
Actual results:
Failed to create Identity provider
Expected results:
Create resources successfully.
Additional info:
Related PRs:
4.10: https://github.com/openshift/cloud-credential-operator/pull/531
4.11: https://github.com/openshift/cloud-credential-operator/pull/530
4.12: https://github.com/openshift/cloud-credential-operator/pull/529
4.13: https://github.com/openshift/cloud-credential-operator/pull/528
4.14: https://github.com/openshift/cloud-credential-operator/pull/526
This is a clone of issue OCPBUGS-2181. The following is the description of the original issue:
—
Description of problem:
The E2E test "Installs Red Hat Integration - 3scale operator" is failing due to a change of the Operator name.
Description of problem:
For OVNK to become CNCF compliant, we need to support the session affinity timeout feature and enable the e2e tests on the OpenShift side. This bug tracks the efforts to get this into 4.12 OCP.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-12914. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-12878. The following is the description of the original issue:
—
We want to add the dual-stack tests to the CNI plugin conformance test suite, for the currently supported releases.
(This has no impact on OpenShift itself. We're just modifying a test suite that OCP does not use.)
This is a clone of issue OCPBUGS-13335. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-13334. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-11753. The following is the description of the original issue:
—
Description of problem:
During the IPI bare-metal provisioning process, the deployment fails with the following error:
Error: could not inspect: could not inspect node, node is currently 'inspect failed' , last error was 'Redfish exception occurred. Error: Failed to get network interface information on node 995431b0-14c8-4aae-9733-e02b42da29ac: The attribute EthernetInterfaces is missing from the resource /redfish/v1/Systems/<redacted>'
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Snippet from my install-config.yaml:
- name: ocp-test-2
  role: master
  bmc:
    address: redfish-virtualmedia://<redacted>/redfish/v1/Systems/<redacted>
    username: admin
    password: <redacted>
    disableCertificateVerification: True
  bootMACAddress: <redacted>
Steps to Reproduce:
see above
Actual results:
Error: could not inspect: could not inspect node, node is currently 'inspect failed' , last error was 'Redfish exception occurred. Error: Failed to get network interface information on node 995431b0-14c8-4aae-9733-e02b42da29ac: The attribute EthernetInterfaces is missing from the resource /redfish/v1/Systems/<redacted>'
Expected results:
Because I already specified the bootMACAddress in the install-config.yaml, I would like the installer to continue and not fail when the Redfish API doesn't provide EthernetInterfaces.
Additional info:
The blades in this specific chassis don't have access to the NIC adapter. Therefore they can't get the MAC address and populate this field in the Redfish API. These blades are currently still selling and will need to be supported for another 5 years at least. Our new servers in the new chassis don't have this problem (X-Series). Please keep the bootMACAddress as an optional field in the install-config.yaml to support these blades.
PROXY protocol cannot be enabled for the "Private" endpoint publishing strategy type.
4.10.0+. PROXY protocol was made configurable for the "HostNetwork" and "NodePortService" endpoint publishing strategy types, but not for "Private", in this release.
Always.
1. Create an ingresscontroller with the "Private" endpoint publishing strategy type:
oc create -f - <<'EOF'
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: example
  namespace: openshift-ingress-operator
spec:
  domain: example.com
  endpointPublishingStrategy:
    type: Private
    private:
      protocol: PROXY
EOF
2. Check the ingresscontroller's status:
oc -n openshift-ingress-operator get ingresscontrollers/example -o 'jsonpath={.status.endpointPublishingStrategy}'
3. Check whether the resulting router deployment has PROXY protocol enabled.
oc -n openshift-ingress get deployments/router-example -o 'jsonpath={.spec.template.spec.containers[0].env[?(@.name=="ROUTER_USE_PROXY_PROTOCOL")]}'
The ingresscontroller is created:
ingresscontroller.operator.openshift.io/example created
The status shows that the spec.endpointPublishingStrategy.private.protocol setting was ignored:
{"private":{},"type":"Private"}
The deployment does not enable PROXY protocol; the oc get command prints no output.
The ingresscontroller's status should indicate that PROXY protocol is enabled:
{"private":{"protocol":"PROXY"},"type":"Private"}
The deployment should have PROXY protocol enabled:
{"name":"ROUTER_USE_PROXY_PROTOCOL","value":"true"}
This bug report duplicates https://bugzilla.redhat.com/show_bug.cgi?id=2104481 in order to facilitate backports.
Description of problem:
There is a bug affecting verify steps functionality on iDRAC hardware in OpenShift 4.11 and 4.10. The original bug report has been made against 4.10: https://issues.redhat.com/browse/OCPBUGS-1740 While I am not aware of this issue being reported against 4.11, since the fix is only present in the 4.12 codebase, 4.11 versions are also affected by this issue. This bug is created to meet automation requirements for backporting the fixes from 4.12 to 4.11 (and then to 4.10 in the bug quoted above).
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
In order to delete the correct GCP cloud resources, the "--credentials-requests-dir" parameter must be passed to "ccoctl gcp delete". This was fixed for 4.12 as part of https://github.com/openshift/cloud-credential-operator/pull/489 but must be backported for previous releases. See https://github.com/openshift/cloud-credential-operator/pull/489#issuecomment-1248733205 for discussion regarding this bug.
To reproduce, create GCP infrastructure with a name parameter that is a prefix of another set of GCP infrastructure's name parameter. Below, I run `ccoctl gcp create-all` with "name=abutcher-gcp" and "name=abutcher-gcp1".
$ ./ccoctl gcp create-all \
    --name=abutcher-gcp \
    --region=us-central1 \
    --project=openshift-hive-dev \
    --credentials-requests-dir=./credrequests
$ ./ccoctl gcp create-all \
    --name=abutcher-gcp1 \
    --region=us-central1 \
    --project=openshift-hive-dev \
    --credentials-requests-dir=./credrequests
Running "ccoctl gcp delete --name=abutcher-gcp" will result in GCP infrastructure for both "abutcher-gcp" and "abutcher-gcp1" being deleted.
$ ./ccoctl gcp delete --name abutcher-gcp --project openshift-hive-dev
2022/10/24 11:30:06 Credentials loaded from file "/home/abutcher/.gcp/osServiceAccount.json"
2022/10/24 11:30:06 Deleted object .well-known/openid-configuration from bucket abutcher-gcp-oidc
2022/10/24 11:30:07 Deleted object keys.json from bucket abutcher-gcp-oidc
2022/10/24 11:30:07 OIDC bucket abutcher-gcp-oidc deleted
2022/10/24 11:30:09 IAM Service account abutcher-gcp-openshift-image-registry-gcs deleted
2022/10/24 11:30:10 IAM Service account abutcher-gcp-openshift-gcp-ccm deleted
2022/10/24 11:30:11 IAM Service account abutcher-gcp1-openshift-cloud-network-config-controller-gcp deleted
2022/10/24 11:30:12 IAM Service account abutcher-gcp-openshift-machine-api-gcp deleted
2022/10/24 11:30:13 IAM Service account abutcher-gcp-openshift-ingress-gcp deleted
2022/10/24 11:30:15 IAM Service account abutcher-gcp-openshift-gcp-pd-csi-driver-operator deleted
2022/10/24 11:30:16 IAM Service account abutcher-gcp1-openshift-ingress-gcp deleted
2022/10/24 11:30:17 IAM Service account abutcher-gcp1-openshift-image-registry-gcs deleted
2022/10/24 11:30:19 IAM Service account abutcher-gcp-cloud-credential-operator-gcp-ro-creds deleted
2022/10/24 11:30:20 IAM Service account abutcher-gcp1-openshift-gcp-pd-csi-driver-operator deleted
2022/10/24 11:30:21 IAM Service account abutcher-gcp1-openshift-gcp-ccm deleted
2022/10/24 11:30:22 IAM Service account abutcher-gcp1-cloud-credential-operator-gcp-ro-creds deleted
2022/10/24 11:30:24 IAM Service account abutcher-gcp1-openshift-machine-api-gcp deleted
2022/10/24 11:30:25 IAM Service account abutcher-gcp-openshift-cloud-network-config-controller-gcp deleted
2022/10/24 11:30:25 Workload identity pool abutcher-gcp deleted
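The observed behavior is consistent with matching resources by name prefix rather than by exact infra name; a small Go illustration of that inference (not ccoctl's actual code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// "abutcher-gcp" is a prefix of "abutcher-gcp1", so a bare prefix match
	// selects both clusters' service accounts, as seen in the transcript above.
	fmt.Println(strings.HasPrefix("abutcher-gcp1-openshift-ingress-gcp", "abutcher-gcp")) // true
	// Including the trailing delimiter in the prefix avoids the collision.
	fmt.Println(strings.HasPrefix("abutcher-gcp1-openshift-ingress-gcp", "abutcher-gcp-")) // false
}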
This bug is a backport clone of [Bugzilla Bug 2034883](https://bugzilla.redhat.com/show_bug.cgi?id=2034883). The following is the description of the original bug:
—
Description of problem:
Situation (starting point):
Problem:
Version-Release number of MCO (Machine Config Operator) (if applicable):
4.7.21
Platform (AWS, VSphere, Metal, etc.): (not relevant)
Are you certain that the root cause of the issue being reported is the MCO (Machine Config Operator)?
(Y/N/Not sure): Y
How reproducible:
Always if the said conditions are met.
Steps to Reproduce:
1. Have some nodes not ready
2. Force a change that requires machine-config-daemon daemonset rollout (I think that changing proxy settings would work for this)
3. Wait until a new kube-apiserver-to-kubelet-client-ca is rolled out by kube-apiserver-operator
Actual results:
New kube-apiserver-to-kubelet-client-ca not forwarded to controllerconfig, kube-apiserver-to-kubelet-client-ca not deployed on nodes
Expected results:
kube-apiserver-to-kubelet-client-ca forwarded to controllerconfig, kube-apiserver-to-kubelet-client-ca deployed to nodes.
Additional info:
In comments
Description of problem:
During an upgrade from 4.10.0-0.nightly-2022-09-06-081345 -> 4.11.0-0.nightly-2022-09-06-074353, the DNS operator is stuck in a progressing state with the following error in the dns-default pod:
2022-09-07T00:47:35.959931289Z [WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://172.30.0.1:443/version": dial tcp 172.30.0.1:443: i/o timeout
must-gather: http://shell.lab.bos.redhat.com/~anusaxen/must-gather-136172-050448657.tar.gz
Version-Release number of selected component (if applicable):
4.11.0-0.nightly-2022-09-06-074353
How reproducible:
Intermittent
Steps to Reproduce:
1. Perform upgrade from 4.10 -> 4.11
Actual results:
Upgrade unsuccessful due to API svc unreachability
Expected results:
Upgrade should be successful
Additional info:
$ omg get co | grep -v "True.*False.*False"
NAME      VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE
dns       4.10.0-0.nightly-2022-09-06-081345   True        True          False      3h4m
network   4.11.0-0.nightly-2022-09-06-074353   True        True          False      3h5m
$ omg logs dns-default-5mqz4 -c dns -n openshift-dns
/home/anusaxen/Downloads/must-gather-136208-318613238/must-gather.local.4033693973186012455/quay-io-openshift-release-dev-ocp-v4-0-art-dev-sha256-d995e1608ba83a1f71b5a1c49705aa3cef38618e0674ee256332d1b0e2cb0e23/namespaces/openshift-dns/pods/dns-default-5mqz4/dns/dns/logs/current.log
2022-09-07T00:45:36.355161941Z [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
2022-09-07T00:45:36.854403897Z [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
2022-09-07T00:45:37.354367197Z [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
2022-09-07T00:45:37.861427861Z [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
2022-09-07T00:45:38.355095996Z [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
2022-09-07T00:45:38.855063626Z [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
2022-09-07T00:45:39.355170472Z [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
2022-09-07T00:45:39.855296624Z [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
2022-09-07T00:45:40.355481413Z [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
2022-09-07T00:45:40.855455283Z [WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
2022-09-07T00:45:40.856166731Z .:5353
2022-09-07T00:45:40.856166731Z [INFO] plugin/reload: Running configuration SHA512 = 7c3db3182389e270fe3af971fef9e7157a34bde01eb750d58c2d4895a965ecb2e2d00df34594b6a819653088b6efa2590e748ce23f9e16cbb0f44d74a4fc3587
2022-09-07T00:45:40.856166731Z CoreDNS-1.9.2
2022-09-07T00:45:40.856166731Z linux/amd64, go1.18.4,
2022-09-07T00:46:05.957503195Z [WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://172.30.0.1:443/version": dial tcp 172.30.0.1:443: i/o timeout
2022-09-07T00:47:25.444876534Z [INFO] SIGTERM: Shutting down servers then terminating
2022-09-07T00:47:25.444984367Z [INFO] plugin/health: Going into lameduck mode for 20s
2022-09-07T00:47:35.959931289Z [WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://172.30.0.1:443/version": dial tcp 172.30.0.1:443: i/o timeout
This is a clone of issue OCPBUGS-1549. The following is the description of the original issue:
—
Description of problem:
The cluster-dns-operator does not reconcile the openshift-dns namespace, which has been exposed as an issue in 4.12 due to the requirement for the namespace to have pod-security labels. If a cluster has been incrementally updated from a version less than or equal to 4.9, the openshift-dns namespace will most likely not contain the required pod-security labels since the namespace was statically created when the cluster was installed with old namespace configuration.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always if cluster originally installed with v4.9 or less
Steps to Reproduce:
1. Install v4.9
2. Upgrade to v4.12 (incrementally if required for the upgrade path)
3. The openshift-dns namespace will be missing pod-security labels
Actual results:
"oc get ns openshift-dns -o yaml" will show missing pod-security labels: apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "" openshift.io/sa.scc.mcs: s0:c15,c0 openshift.io/sa.scc.supplemental-groups: 1000210000/10000 openshift.io/sa.scc.uid-range: 1000210000/10000 creationTimestamp: "2020-05-21T19:36:15Z" labels: kubernetes.io/metadata.name: openshift-dns olm.operatorgroup.uid/3d42c0c1-01cd-4c55-bf88-864f041c7e7a: "" openshift.io/cluster-monitoring: "true" openshift.io/run-level: "0" name: openshift-dns resourceVersion: "3127555382" uid: 0fb4571e-952f-4bea-bc45-461beec54369 spec: finalizers: - kubernetes
Expected results:
pod-security labels should exist:
labels:
  kubernetes.io/metadata.name: openshift-dns
  olm.operatorgroup.uid/3d42c0c1-01cd-4c55-bf88-864f041c7e7a: ""
  openshift.io/cluster-monitoring: "true"
  openshift.io/run-level: "0"
  pod-security.kubernetes.io/audit: privileged
  pod-security.kubernetes.io/enforce: privileged
  pod-security.kubernetes.io/warn: privileged
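A minimal Go sketch of the reconciliation the report asks for, filling in any missing pod-security labels; the label keys and values are taken from the expected results above, and the types are simplified stand-ins, not the operator's actual code:

package main

import "fmt"

// requiredLabels holds the pod-security labels from the expected results.
var requiredLabels = map[string]string{
	"pod-security.kubernetes.io/audit":   "privileged",
	"pod-security.kubernetes.io/enforce": "privileged",
	"pod-security.kubernetes.io/warn":    "privileged",
}

// ensureLabels returns a copy of the namespace labels with any missing
// pod-security labels filled in.
func ensureLabels(existing map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range existing {
		out[k] = v
	}
	for k, v := range requiredLabels {
		if _, ok := out[k]; !ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	fmt.Println(ensureLabels(map[string]string{"kubernetes.io/metadata.name": "openshift-dns"}))
}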
Additional info:
Issue found in CI during upgrade
https://coreos.slack.com/archives/C03G7REB4JV/p1663676443155839
Discovered in the must gather kubelet_service.log from https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-sdn-upgrade/1586093220087992320
It appears the guard pod names are too long, and are being truncated down to where they collide with those from the other masters.
From kubelet logs in this run:
❯ grep openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste kubelet_service.log
Oct 28 23:58:55.693391 ci-op-3hj6pnwf-4f6ab-lv57z-master-1 kubenswrapper[1657]: E1028 23:58:55.693346 1657 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-1" hostnameMaxLen=63 truncatedHostname="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste"
Oct 28 23:59:03.735726 ci-op-3hj6pnwf-4f6ab-lv57z-master-0 kubenswrapper[1670]: E1028 23:59:03.735671 1670 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-0" hostnameMaxLen=63 truncatedHostname="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste"
Oct 28 23:59:11.168082 ci-op-3hj6pnwf-4f6ab-lv57z-master-2 kubenswrapper[1667]: E1028 23:59:11.168041 1667 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-2" hostnameMaxLen=63 truncatedHostname="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste"
This also looks to be happening for openshift-kube-scheduler-guard, kube-controller-manager-guard, possibly others.
Looks like they should be truncated further to make room for random suffixes in https://github.com/openshift/library-go/blame/bd9b0e19121022561dcd1d9823407cd58b2265d0/pkg/operator/staticpod/controller/guard/guard_controller.go#L97-L98
Unsure of the implications here, it looks a little scary.
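A small, self-contained Go illustration of the collision, using the 63-character limit and the pod names from the kubelet logs above:

package main

import "fmt"

// truncateHostname mimics the 63-character hostname cap from the kubelet
// logs above; distinct guard pod names collapse to the same hostname.
func truncateHostname(name string) string {
	const maxLen = 63
	if len(name) > maxLen {
		return name[:maxLen]
	}
	return name
}

func main() {
	a := truncateHostname("openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-1")
	b := truncateHostname("openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-2")
	fmt.Println(a == b) // true: both truncate to the same 63 characters
}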
Description of problem:
The storageclass "thin-csi" is created by vsphere-CSI-Driver-Operator, after deleting it manually, it should be re-created immediately.
Version-Release number of selected component (if applicable):
4.11.4
How reproducible:
Always
Steps to Reproduce:
1. Check storageclasses in a running cluster; thin-csi is present:
$ oc get sc
NAME             PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
thin (default)   kubernetes.io/vsphere-volume   Delete          Immediate              false                  41m
thin-csi         csi.vsphere.vmware.com         Delete          WaitForFirstConsumer   true                   38m
2. Delete the thin-csi storageclass:
$ oc delete sc thin-csi
storageclass.storage.k8s.io "thin-csi" deleted
3. Check storageclasses again; thin-csi is not present:
$ oc get sc
NAME             PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
thin (default)   kubernetes.io/vsphere-volume   Delete          Immediate           false                  50m
4. Check the vmware-vsphere-csi-driver-operator log:
......
I0909 03:47:42.172866 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662695014\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662695014\" (2022-09-09 02:43:34 +0000 UTC to 2023-09-09 02:43:34 +0000 UTC (now=2022-09-09 03:47:42.172853123 +0000 UTC))"
I0909 03:49:38.294962 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
I0909 03:49:38.295468 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
I0909 03:49:38.295765 1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF
5. The storageclass is only created once, per the vmware-vsphere-csi-driver-operator log:
$ oc -n openshift-cluster-csi-drivers logs vmware-vsphere-csi-driver-operator-7cc6d44b5c-c8czw | grep -i "storageclass"
I0909 03:46:31.865926 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-cluster-csi-drivers", Name:"vmware-vsphere-csi-driver-operator", UID:"9e0c3e2d-d403-40a1-bf69-191d7aec202b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'StorageClassCreated' Created StorageClass.storage.k8s.io/thin-csi because it was missing
Actual results:
The storageclass "thin-csi" could not be re-created after deleting
Expected results:
The storageclass "thin-csi" should be re-created after deleting
Additional info:
This is a clone of issue OCPBUGS-6661. The following is the description of the original issue:
—
Description of problem:
The CRL list is capped at 1MB due to the ConfigMap max size. If multiple public CRLs are needed for an ingress controller, the combined CRL PEM file can exceed 1MB.
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. Create a CRL configmap with the following distribution points:
Issuer: C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1
Subject: SOME SIGNED CERT
X509v3 CRL Distribution Points:
  Full Name:
    URI:http://crl3.digicert.com/DigiCertGlobalG2TLSRSASHA2562020CA1-2.crl
# curl -o DigiCertGlobalG2TLSRSASHA2562020CA1-2.crl http://crl3.digicert.com/DigiCertGlobalG2TLSRSASHA2562020CA1-2.crl
# openssl crl -in DigiCertGlobalG2TLSRSASHA2562020CA1-2.crl -inform DER -out DigiCertGlobalG2TLSRSASHA2562020CA1-2.pem
# du -bsh DigiCertGlobalG2TLSRSASHA2562020CA1-2.pem
604K DigiCertGlobalG2TLSRSASHA2562020CA1-2.pem
I still need to find more intermediate CRLs to grow this.
Actual results:
2023-01-25T13:45:01.443Z ERROR operator.init controller/controller.go:273 Reconciler error {"controller": "crl", "object": {"name":"custom","namespace":"openshift-ingress-operator"}, "namespace": "openshift-ingress-operator", "name": "custom", "reconcileID": "d49d9b96-d509-4562-b3d9-d4fc315226c0", "error": "failed to ensure client CA CRL configmap for ingresscontroller openshift-ingress-operator/custom: failed to update configmap: ConfigMap \"router-client-ca-crl-custom\" is invalid: []: Too long: must have at most 1048576 bytes"}
Expected results:
First, be able to create a configmap where only the data counts toward the 1MB max (see additional info below for more details); second, some way to compress or allow a CRL list that would be larger than 1MB.
Additional info:
Using only this CRL, at only 600K, still causes the issue, and it could be due to the `last-applied-configuration` annotation on the configmap. This is added since we do an apply operation (update) on the configmap. I am not sure if this is counting towards the 1MB max. https://github.com/openshift/cluster-ingress-operator/blob/release-4.10/pkg/operator/controller/crl/crl_configmap.go#L295 Not sure if we could just replace the configmap.
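A small Go illustration of the sizing concern, under the assumption (drawn from this report, not verified) that the last-applied-configuration annotation stores a full serialized copy of the data and counts toward the 1MiB cap:

package main

import "fmt"

// maxConfigMapBytes is the 1MiB cap from the error message above.
const maxConfigMapBytes = 1 << 20

// fitsWithAnnotation estimates whether CRL data fits once a serialized copy
// in the last-applied-configuration annotation is also counted (assumption).
func fitsWithAnnotation(crlPEMBytes int) bool {
	return 2*crlPEMBytes < maxConfigMapBytes
}

func main() {
	fmt.Println(fitsWithAnnotation(604 * 1024)) // false: a 604K CRL no longer fits
}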
Description of problem:
Dummy bug that is needed to track backport of https://github.com/ovn-org/ovn-kubernetes/pull/2975/commits/816e30a1fbb5beb8b20fe3e96906285762dd8eb6 which is already merged in 4.12
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem: Problem described in the following issue: https://github.com/openshift/multus-admission-controller/issues/40
Fixed in: https://github.com/openshift/cluster-network-operator/pull/1515
Version-Release number of selected component (if applicable): OCP 4.10
Official Red Hat tracker. Issue has been merged already.
OCPBUGS-1251 landed an admin-ack gate in 4.11.z to help admins prepare for Kubernetes 1.25 API removals which are coming in OpenShift 4.12. Poking around in a 4.12.0-ec.2 cluster where APIRemovedInNextReleaseInUse is firing:
$ oc --as system:admin adm must-gather -- /usr/bin/gather_audit_logs
$ zgrep -h v1beta1/poddisruptionbudget must-gather.local.1378724704026451055/quay*/audit_logs/kube-apiserver/*.log.gz | jq -r '.verb + " " + (.user | .username + " " + (.extra["authentication.kubernetes.io/pod-name"] | tostring))' | sort | uniq -c
parse error: Invalid numeric literal at line 29, column 6
28 watch system:serviceaccount:openshift-machine-api:cluster-autoscaler ["cluster-autoscaler-default-5cf997b8d6-ptgg7"]
Finding the source for that container:
$ oc --as system:admin -n openshift-machine-api get -o json pod cluster-autoscaler-default-5cf997b8d6-ptgg7 | jq -r '.status.containerStatuses[].image'
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81ab7ce0c851ba5e5169bba717cb54716ce5457cbe89d159c97a5c25fd820ed
$ oc image info quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81ab7ce0c851ba5e5169bba717cb54716ce5457cbe89d159c97a5c25fd820ed | grep github
SOURCE_GIT_URL=https://github.com/openshift/kubernetes-autoscaler
io.openshift.build.commit.url=https://github.com/openshift/kubernetes-autoscaler/commit/1dac0311b9842958ec630273428b74703d51c1c9
io.openshift.build.source-location=https://github.com/openshift/kubernetes-autoscaler
Poking about in the source:
$ git clone --depth 30 --branch master https://github.com/openshift/kubernetes-autoscaler.git
$ cd kubernetes-autoscaler
$ find . -name vendor
./addon-resizer/vendor
./cluster-autoscaler/vendor
./vertical-pod-autoscaler/e2e/vendor
./vertical-pod-autoscaler/vendor
Lots of vendoring. I haven't checked to see how new the client code is in the various vendor packages. But the main issue seems to be the v1beta1 in:
$ git grep policy cluster-autoscaler/core cluster-autoscaler/utils | grep policy.*v1beta1
cluster-autoscaler/core/scaledown/actuation/actuator_test.go: policyv1beta1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/actuation/actuator_test.go: eviction := createAction.GetObject().(*policyv1beta1.Eviction)
cluster-autoscaler/core/scaledown/actuation/drain.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/actuation/drain_test.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/legacy/legacy.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/legacy/wrapper.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/scaledown.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/static_autoscaler_test.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/drain/drain.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/drain/drain_test.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/kubernetes/listers.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/kubernetes/listers.go: v1policylister "k8s.io/client-go/listers/policy/v1beta1"
The main change from v1beta1 to v1 involves spec.selector; I don't know if that's relevant to the autoscaler use case or not.
Do we run autoscaler CI? I was poking around a bit, but did not find a 4.12 periodic exercising the autoscaler that might have turned up this alert and issue.
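For what it's worth, live usage of the removed API can also be checked from the cluster itself via the APIRequestCount resource (a sketch; the field layout is assumed from the apiserver.openshift.io/v1 API):
$ oc get apirequestcounts poddisruptionbudgets.v1beta1.policy -o jsonpath='{range .status.last24h[*].byNode[*].byUser[*]}{.username}{"\n"}{end}' | sort | uniq -c
A high count for the cluster-autoscaler service account would match the audit-log finding above.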
The issue found while testing HOSTEDCP-400 and HOSTEDCP-401.
Hypershift operator installed with flags:
--platform-monitoring=operator-only --enable-uwm-telemetry-remote-write=true --metrics-set=telemetry
Service monitors and pod monitors in the control plane:
[jiezhao@cube hypershift]$ oc get servicemonitor -n clusters-jz-test
NAME                                  AGE
catalog-operator                      45m
cluster-version-operator              45m
etcd                                  46m
kube-apiserver                        46m
kube-controller-manager               45m
monitor-multus-admission-controller   43m
monitor-ovn-master-metrics            43m
node-tuning-operator                  45m
olm-operator                          45m
openshift-apiserver                   45m
openshift-controller-manager          45m
[jiezhao@cube hypershift]$ oc get podmonitor -n clusters-jz-test
NAME                              AGE
cluster-image-registry-operator   46m
controlplane-operator             47m
hosted-cluster-config-operator    46m
ignition-server                   47m
In the OCP management web console, go to Observe -> Targets:
1. The status of service monitor 'monitor-multus-admission-controller' is Down, with error: "Scrape failed: server returned HTTP status 401 Unauthorized". It doesn't have a cluster id in its target labels.
2. The target of pod monitor 'cluster-image-registry-operator' is missing, not shown.
This is a clone of issue OCPBUGS-7437. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5547. The following is the description of the original issue:
—
Description of problem:
This is a follow-up on https://bugzilla.redhat.com/show_bug.cgi?id=2083087 and https://github.com/openshift/console/pull/12390
When creating a Knative Service and then deleting it again with the option "Delete other resources created by console" enabled (only available on 4.13+ with the PR above), the secret "$name-github-webhook-secret" is not deleted.
When the user tries to create the same Knative Service again, this fails with an error:
An error occurred
secrets "nodeinfo-github-webhook-secret" already exists
Version-Release number of selected component (if applicable):
4.13
(we might want to backport this together with https://github.com/openshift/console/pull/12390 and OCPBUGS-5548)
How reproducible:
Always
Steps to Reproduce:
Actual results:
Deleted resources:
Expected results:
Should also remove this resource
Additional info:
When deleting the whole application, all the resources are deleted correctly (and just once)!
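As a workaround until the console deletes it, the leftover secret can be removed manually in the affected project (name taken from the error above):
$ oc delete secret nodeinfo-github-webhook-secret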
This is a clone of issue OCPBUGS-11404. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-11333. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10690. The following is the description of the original issue:
—
Description of problem:
According to PR https://github.com/openshift/cluster-monitoring-operator/pull/1824, the startupProbe window for both UWM Prometheus and platform Prometheus should be 1 hour, but the startupProbe for UWM Prometheus is still 15m after enabling UWM. Platform Prometheus does not have the issue; its startupProbe was increased to 1 hour.
$ oc -n openshift-user-workload-monitoring get pod prometheus-user-workload-0 -oyaml | grep startupProbe -A20
startupProbe:
  exec:
    command:
    - sh
    - -c
    - if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi
  failureThreshold: 60
  periodSeconds: 15
  successThreshold: 1
  timeoutSeconds: 3
...
$ oc -n openshift-monitoring get pod prometheus-k8s-0 -oyaml | grep startupProbe -A20
startupProbe:
  exec:
    command:
    - sh
    - -c
    - if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi
  failureThreshold: 240
  periodSeconds: 15
  successThreshold: 1
  timeoutSeconds: 3
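The effective startup window is failureThreshold × periodSeconds: 60 × 15s = 900s = 15m for UWM versus 240 × 15s = 3600s = 1h for platform Prometheus. A shorter way to pull just that field (a sketch; the container is assumed to be named prometheus):
$ for p in openshift-user-workload-monitoring/prometheus-user-workload-0 openshift-monitoring/prometheus-k8s-0; do oc -n "${p%/*}" get pod "${p#*/}" -o jsonpath='{.spec.containers[?(@.name=="prometheus")].startupProbe.failureThreshold}{"\n"}'; done
60
240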
Version-Release number of selected component (if applicable):
4.13.0-0.nightly-2023-03-19-052243
How reproducible:
always
Steps to Reproduce:
1. Enable UWM, then check the startupProbe for UWM Prometheus and platform Prometheus
Actual results:
startupProbe for UWM prometheus is still 15m
Expected results:
startupProbe for UWM prometheus should be 1 hour
Additional info:
Since the startupProbe for platform Prometheus was already increased to 1 hour, and there is no similar bug for UWM Prometheus alone, closing this issue as won't-fix would be acceptable.
Description of problem:
In a completely disconnected cluster, the dev catalog takes too long to load
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Use a completely disconnected cluster
2. On the Add page, go to the All services page
3.
Actual results:
Takes too long to load
Expected results:
The load time should be reduced
Additional info:
Attached a gif for reference
Description of problem:
In a 4.11 cluster with only openshift-samples enabled, the 4.12 introduced optional COs console and insights are installed. While upgrading to 4.12, CVO considers them to be disabled explicitly and skips reconciling them. So these COs are not upgraded to 4.12. Installed COs cannot be disabled, so CVO is supposed to implicitly enable them. $ oc get clusterversion -oyaml { "apiVersion": "config.openshift.io/v1", "kind": "ClusterVersion", "metadata": { "creationTimestamp": "2022-09-30T05:02:31Z", "generation": 3, "name": "version", "resourceVersion": "134808", "uid": "bd95473f-ffda-402d-8fe3-74f852a9d6eb" }, "spec": { "capabilities": { "additionalEnabledCapabilities": [ "openshift-samples" ], "baselineCapabilitySet": "None" }, "channel": "stable-4.11", "clusterID": "8eda5167-a730-4b39-be1d-214a80506d34", "desiredUpdate": { "force": true, "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "version": "" } }, "status": { "availableUpdates": null, "capabilities": { "enabledCapabilities": [ "openshift-samples" ], "knownCapabilities": [ "Console", "Insights", "Storage", "baremetal", "marketplace", "openshift-samples" ] }, "conditions": [ { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Unable to retrieve available updates: currently reconciling cluster version 4.12.0-0.nightly-2022-09-28-204419 not found in the \"stable-4.11\" channel", "reason": "VersionNotFound", "status": "False", "type": "RetrievedUpdates" }, { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Capabilities match configured spec", "reason": "AsExpected", "status": "False", "type": "ImplicitlyEnabledCapabilities" }, { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Payload loaded version=\"4.12.0-0.nightly-2022-09-28-204419\" image=\"registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc\" architecture=\"amd64\"", "reason": "PayloadLoaded", "status": "True", "type": "ReleaseAccepted" }, { "lastTransitionTime": "2022-09-30T05:23:18Z", "message": "Done applying 4.12.0-0.nightly-2022-09-28-204419", "status": "True", "type": "Available" }, { "lastTransitionTime": "2022-09-30T07:05:42Z", "status": "False", "type": "Failing" }, { "lastTransitionTime": "2022-09-30T07:41:53Z", "message": "Cluster version is 4.12.0-0.nightly-2022-09-28-204419", "status": "False", "type": "Progressing" } ], "desired": { "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "version": "4.12.0-0.nightly-2022-09-28-204419" }, "history": [ { "completionTime": "2022-09-30T07:41:53Z", "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "startedTime": "2022-09-30T06:42:01Z", "state": "Completed", "verified": false, "version": "4.12.0-0.nightly-2022-09-28-204419" }, { "completionTime": "2022-09-30T05:23:18Z", "image": "registry.ci.openshift.org/ocp/release@sha256:5a6f6d1bf5c752c75d7554aa927c06b5ea0880b51909e83387ee4d3bca424631", "startedTime": "2022-09-30T05:02:33Z", "state": "Completed", "verified": false, "version": "4.11.0-0.nightly-2022-09-29-191451" } ], "observedGeneration": 3, "versionHash": "CSCJ2fxM_2o=" } } $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2022-09-28-204419 True False False 93m cloud-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h56m cloud-credential 
4.12.0-0.nightly-2022-09-28-204419 True False False 3h59m cluster-autoscaler 4.12.0-0.nightly-2022-09-28-204419 True False False 3h53m config-operator 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m console 4.11.0-0.nightly-2022-09-29-191451 True False False 3h45m control-plane-machine-set 4.12.0-0.nightly-2022-09-28-204419 True False False 117m csi-snapshot-controller 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m dns 4.12.0-0.nightly-2022-09-28-204419 True False False 3h53m etcd 4.12.0-0.nightly-2022-09-28-204419 True False False 3h52m image-registry 4.12.0-0.nightly-2022-09-28-204419 True False False 3h46m ingress 4.12.0-0.nightly-2022-09-28-204419 True False False 151m insights 4.11.0-0.nightly-2022-09-29-191451 True False False 3h48m kube-apiserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h50m kube-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h51m kube-scheduler 4.12.0-0.nightly-2022-09-28-204419 True False False 3h51m kube-storage-version-migrator 4.12.0-0.nightly-2022-09-28-204419 True False False 91m machine-api 4.12.0-0.nightly-2022-09-28-204419 True False False 3h50m machine-approver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m machine-config 4.12.0-0.nightly-2022-09-28-204419 True False False 3h52m monitoring 4.12.0-0.nightly-2022-09-28-204419 True False False 3h44m network 4.12.0-0.nightly-2022-09-28-204419 True False False 3h55m node-tuning 4.12.0-0.nightly-2022-09-28-204419 True False False 113m openshift-apiserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h48m openshift-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 113m openshift-samples 4.12.0-0.nightly-2022-09-28-204419 True False False 116m operator-lifecycle-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h48m service-ca 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m storage 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-28-204419
How reproducible:
Always
Steps to Reproduce:
1. Install a 4.11 cluster with only openshift-samples enabled
2. Upgrade to 4.12
Actual results:
The optional COs console and insights, introduced in 4.12, are not upgraded to 4.12
Expected results:
All the installed COs get upgraded
Additional info:
We're seeing a slight uptick in how long upgrades are taking[1][2]. We are not 100% sure of the cause, but it looks like it started with 4.11 rc.7. There's no obvious culprits in the diff[3].
Looking at some of the jobs, we are seeing the gaps between kube-scheduler being updated and then machine-api appear to take longer. Example job run[4] showing 10+ minutes waiting for it.
TRT had a debugging session, and we have two suggestions:
[1] https://search.ci.openshift.org/graph/metrics?metric=job%3Aduration%3Atotal%3Aseconds&job=periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade&job=periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-aws-sdn-upgrade&job=periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-azure-upgrade&job=periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-ovn-upgrade&job=periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-sdn-upgrade
[2] https://sippy.dptools.openshift.org/sippy-ng/tests/4.12/analysis?test=Cluster%20upgrade.%5Bsig-cluster-lifecycle%5D%20cluster%20upgrade%20should%20complete%20in%2075.00%20minutes
[3] https://amd64.ocp.releases.ci.openshift.org/releasestream/4-stable/release/4.11.0-rc.7
[4] https://prow.ci.openshift.org/view/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-azure-sdn-upgrade/1556865989923049472
Description of problem:
release-4.11 of openshift/cloud-provider-openstack is missing some commits that were backported in the upstream project into the release-1.24 branch. We should import them into our downstream fork.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-669. The following is the description of the original issue:
—
Description of problem:
This is an OCP clone of https://bugzilla.redhat.com/show_bug.cgi?id=2099794 In summary, NetworkManager reports the network as being up before the ipv6 address of the primary interface is ready and crio fails to bind to it.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-6671. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-3228. The following is the description of the original issue:
—
While starting a PipelineRun using the UI, in the process of providing the values on "Start Pipeline", the IBM Power customer (Deepak Shetty from IBM) tried creating credentials under "Advanced options" with "Image Registry Credentials" (Authentication type). When the customer verified the credentials from the Secrets tab (in Workloads), the secret was found in a broken state. A screenshot of the broken secret is attached.
The issue has been observed on OCP 4.8, OCP 4.9 and OCP 4.10.
This is a clone of issue OCPBUGS-6517. The following is the description of the original issue:
—
Description of problem:
When the cluster is configured with a proxy, the Swift client in the image registry operator does not use the proxy to authenticate with OpenStack, so it is unable to reach the OpenStack API. This became evident since support was recently added to not fall back to Cinder when Swift is available[1].
[1]https://github.com/openshift/cluster-image-registry-operator/pull/819
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Deploy a cluster with proxy and restricted installation
Actual results:
Expected results:
Additional info:
Description of problem:
See https://bugzilla.redhat.com/show_bug.cgi?id=2104275
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
AWS tagging - when applying user-defined tags, you cannot add more than 10
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4499. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-860. The following is the description of the original issue:
—
Description of problem:
In GCP, once an external IP address is assigned to a master/infra node through the GCP console, the number of pending CSRs from kubernetes.io/kubelet-serving keeps increasing, and the following errors are reported:
I0902 10:48:29.254427 1 controller.go:121] Reconciling CSR: csr-q7bwd
I0902 10:48:29.365774 1 csr_check.go:157] csr-q7bwd: CSR does not appear to be client csr
I0902 10:48:29.371827 1 csr_check.go:545] retrieving serving cert from build04-c92hb-master-1.c.openshift-ci-build-farm.internal (10.0.0.5:10250)
I0902 10:48:29.375052 1 csr_check.go:188] Found existing serving cert for build04-c92hb-master-1.c.openshift-ci-build-farm.internal
I0902 10:48:29.375152 1 csr_check.go:192] Could not use current serving cert for renewal: CSR Subject Alternate Name values do not match current certificate
I0902 10:48:29.375166 1 csr_check.go:193] Current SAN Values: [build04-c92hb-master-1.c.openshift-ci-build-farm.internal 10.0.0.5], CSR SAN Values: [build04-c92hb-master-1.c.openshift-ci-build-farm.internal 10.0.0.5 35.211.234.95]
I0902 10:48:29.375175 1 csr_check.go:202] Falling back to machine-api authorization for build04-c92hb-master-1.c.openshift-ci-build-farm.internal
E0902 10:48:29.375184 1 csr_check.go:420] csr-q7bwd: IP address '35.211.234.95' not in machine addresses: 10.0.0.5
I0902 10:48:29.375193 1 csr_check.go:205] Could not use Machine for serving cert authorization: IP address '35.211.234.95' not in machine addresses: 10.0.0.5
I0902 10:48:29.379457 1 csr_check.go:218] Falling back to serving cert renewal with Egress IP checks
I0902 10:48:29.382668 1 csr_check.go:221] Could not use current serving cert and egress IPs for renewal: CSR Subject Alternate Names includes unknown IP addresses
I0902 10:48:29.382702 1 controller.go:233] csr-q7bwd: CSR not authorized
Version-Release number of selected component (if applicable):
4.11.2
Steps to Reproduce:
1. Assign external IPs to master/infra nodes in GCP
2. oc get csr | grep kubernetes.io/kubelet-serving
Actual results:
CSRs are not approved
Expected results:
CSRs are approved
Additional info:
This issue only happens in GCP; the same OpenShift installations in AWS do not have it. It looks like the CSRs are created using the external IP addresses once they are assigned. Ref: https://coreos.slack.com/archives/C03KEQZC1L2/p1662122007083059
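As a stopgap (not a fix for the approver logic), the pending serving CSRs can be approved by hand; a sketch using the usual approval pattern:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve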
Description of problem:
The IPI installation in some regions fails during bootstrap, with no node available/ready.
Version-Release number of selected component (if applicable):
12-22 16:22:27.970 ./openshift-install 4.12.0-0.nightly-2022-12-21-202045
12-22 16:22:27.970 built from commit 3f9c38a5717c638f952df82349c45c7d6964fcd9
12-22 16:22:27.970 release image registry.ci.openshift.org/ocp/release@sha256:2d910488f25e2638b6d61cda2fb2ca5de06eee5882c0b77e6ed08aa7fe680270
12-22 16:22:27.971 release architecture amd64
How reproducible:
Always
Steps to Reproduce:
1. try the IPI installation in the problem regions (so far tried and failed with ap-southeast-2, ap-south-1, eu-west-1, ap-southeast-6, ap-southeast-3, ap-southeast-5, eu-central-1, cn-shanghai, cn-hangzhou and cn-beijing)
Actual results:
Bootstrap failed to complete
Expected results:
Installation in those regions should succeed.
Additional info:
FYI the QE flexy-install job: https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/Flexy-install/166672/ No any node available/ready, and no any operator available. $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 30m Unable to apply 4.12.0-0.nightly-2022-12-21-202045: an unknown error has occurred: MultipleErrors $ oc get nodes No resources found $ oc get machines -n openshift-machine-api -o wide NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE jiwei-1222f-v729x-master-0 30m jiwei-1222f-v729x-master-1 30m jiwei-1222f-v729x-master-2 30m $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication baremetal cloud-controller-manager cloud-credential cluster-autoscaler config-operator console control-plane-machine-set csi-snapshot-controller dns etcd image-registry ingress insights kube-apiserver kube-controller-manager kube-scheduler kube-storage-version-migrator machine-api machine-approver machine-config marketplace monitoring network node-tuning openshift-apiserver openshift-controller-manager openshift-samples operator-lifecycle-manager operator-lifecycle-manager-catalog operator-lifecycle-manager-packageserver service-ca storage $ Mater nodes don't run for example kubelet and crio services. [core@jiwei-1222f-v729x-master-0 ~]$ sudo crictl ps FATA[0000] unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory" [core@jiwei-1222f-v729x-master-0 ~]$ The machine-config-daemon firstboot tells "failed to update OS". [jiwei@jiwei log-bundle-20221222085846]$ grep -Ei 'error|failed' control-plane/10.0.187.123/journals/journal.log Dec 22 16:24:16 localhost kernel: GPT: Use GNU Parted to correct GPT errors. Dec 22 16:24:16 localhost kernel: GPT: Use GNU Parted to correct GPT errors. Dec 22 16:24:18 localhost ignition[867]: failed to fetch config: resource requires networking Dec 22 16:24:18 localhost ignition[891]: GET error: Get "http://100.100.100.200/latest/user-data": dial tcp 100.100.100.200:80: connect: network is unreachable Dec 22 16:24:18 localhost ignition[891]: GET error: Get "http://100.100.100.200/latest/user-data": dial tcp 100.100.100.200:80: connect: network is unreachable Dec 22 16:24:19 localhost.localdomain NetworkManager[919]: <info> [1671726259.0329] hostname: hostname: hostnamed not used as proxy creation failed with: Could not connect: No such file or directory Dec 22 16:24:19 localhost.localdomain NetworkManager[919]: <warn> [1671726259.0464] sleep-monitor-sd: failed to acquire D-Bus proxy: Could not connect: No such file or directory Dec 22 16:24:19 localhost.localdomain ignition[891]: GET error: Get "https://api-int.jiwei-1222f.alicloud-qe.devcluster.openshift.com:22623/config/master": dial tcp 10.0.187.120:22623: connect: connection refused ...repeated logs omitted... Dec 22 16:27:46 jiwei-1222f-v729x-master-0 ovs-ctl[1888]: 2022-12-22T16:27:46Z|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Dec 22 16:27:46 jiwei-1222f-v729x-master-0 ovs-vswitchd[1888]: ovs|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Dec 22 16:27:46 jiwei-1222f-v729x-master-0 dbus-daemon[1669]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.resolve1.service': Unit dbus-org.freedesktop.resolve1.service not found. 
Dec 22 16:27:46 jiwei-1222f-v729x-master-0 nm-dispatcher[1924]: Error: Device '' not found. Dec 22 16:27:46 jiwei-1222f-v729x-master-0 nm-dispatcher[1937]: Error: Device '' not found. Dec 22 16:27:46 jiwei-1222f-v729x-master-0 nm-dispatcher[2037]: Error: Device '' not found. Dec 22 08:35:32 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: Warning: failed, retrying in 1s ... (1/2)I1222 08:35:32.477770 2181 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-extensions/os-extensions-content-910221290 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:259d8c6b9ec714d53f0275db9f2962769f703d4d395afb9d902e22cfe96021b0 Dec 22 08:56:06 jiwei-1222f-v729x-master-0 rpm-ostree[2288]: Txn Rebase on /org/projectatomic/rpmostree1/rhcos failed: remote error: Get "https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:27f262e70d98996165748f4ab50248671d4a4f97eb67465cd46e1de2d6bd24d0": net/http: TLS handshake timeout Dec 22 08:56:06 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: W1222 08:56:06.785425 2181 firstboot_complete_machineconfig.go:46] error: failed to update OS to quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:411e6e3be017538859cfbd7b5cd57fc87e5fee58f15df19ed3ec11044ebca511 : error running rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:411e6e3be017538859cfbd7b5cd57fc87e5fee58f15df19ed3ec11044ebca511: Warning: The unit file, source configuration file or drop-ins of rpm-ostreed.service changed on disk. Run 'systemctl daemon-reload' to reload units. Dec 22 08:56:06 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: error: remote error: Get "https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:27f262e70d98996165748f4ab50248671d4a4f97eb67465cd46e1de2d6bd24d0": net/http: TLS handshake timeout Dec 22 08:57:31 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: Warning: failed, retrying in 1s ... (1/2)I1222 08:57:31.244684 2181 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-extensions/os-extensions-content-4021566291 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:259d8c6b9ec714d53f0275db9f2962769f703d4d395afb9d902e22cfe96021b0 Dec 22 08:59:20 jiwei-1222f-v729x-master-0 systemd[2353]: /usr/lib/systemd/user/podman-kube@.service:10: Failed to parse service restart specifier, ignoring: never Dec 22 08:59:21 jiwei-1222f-v729x-master-0 podman[2437]: Error: open default: no such file or directory Dec 22 08:59:21 jiwei-1222f-v729x-master-0 podman[2450]: Error: failed to start API service: accept unixgram @00026: accept4: operation not supported Dec 22 08:59:21 jiwei-1222f-v729x-master-0 systemd[2353]: podman-kube@default.service: Failed with result 'exit-code'. Dec 22 08:59:21 jiwei-1222f-v729x-master-0 systemd[2353]: Failed to start A template for running K8s workloads via podman-play-kube. Dec 22 08:59:21 jiwei-1222f-v729x-master-0 systemd[2353]: podman.service: Failed with result 'exit-code'. [jiwei@jiwei log-bundle-20221222085846]$
Description of problem:
This bugs purpose is to enable a feature backport of https://issues.redhat.com/browse/MON-1949.
This is a clone of issue OCPBUGS-5191. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5164. The following is the description of the original issue:
—
Description of problem:
It looks like the ODC doesn't register KNATIVE_SERVING and KNATIVE_EVENTING flags. Those are based on KnativeServing and KnativeEventing CRs, but they are looking for v1alpha1 version of those: https://github.com/openshift/console/blob/f72519fdf2267ad91cc0aa51467113cc36423a49/frontend/packages/knative-plugin/console-extensions.json#L6-L8
This PR https://github.com/openshift-knative/serverless-operator/pull/1695 moved the CRs to v1beta1, and that breaks that ODC discovery.
Version-Release number of selected component (if applicable):
Openshift 4.8, Serverless Operator 1.27
Additional info:
https://coreos.slack.com/archives/CHGU4P8UU/p1671634903447019
Description of problem:
Users search for a resource (for example, Pod) with the Name filter applied; after they input text into the filter field, the search results are filtered accordingly.
Once the results are shown, when the user clears the value in one shot (i.e. selects the whole filter text in the field and deletes it with the Delete or Backspace key),
the search results don't clear accordingly and the previous results stay on the page.
Version-Release number of selected components (if applicable):
4.11.0-0.nightly-2022-08-16-194731; works fine with the latest OCP 4.12 version.
How reproducible:
Always
Steps to Reproduce:
Actual results:
The search results don't clear when the user clears the name filter in one shot, for any resource.
Expected results:
The search results should clear when the user clears the name filter in one shot, for any resource.
Additional info:
Reproduced in both chrome[103.0.5060.114 (Official Build) (64-bit)] and firefox[91.11.0esr (64-bit)] browsers.
Attached screen share for the same issue. SearchIssues.mp4
Description of problem:
This repository is out of sync with the downstream product builds for this component.
One or more images differ from those being used by ART to create product builds. This
should be addressed to ensure that the component's CI testing is accurately
reflecting what customers will experience.
The information within the following ART component metadata is driving this alignment
request: ose-baremetal-installer.yml.
The vast majority of these PRs are opened because a different Golang version is being
used to build the downstream component. ART compiles most components with the version
of Golang being used by the control plane for a given OpenShift release. Exceptions
to this convention (i.e. you believe your component must be compiled with a Golang
version independent from the control plane) must be granted by the OpenShift
architecture team and communicated to the ART team.
Version-Release number of selected component (if applicable): 4.11
Additional info: This issue is needed for the bot-created PR to merge after the 4.11 GA.
Node healthz server was added in 4.13 with https://github.com/openshift/ovn-kubernetes/commit/c8489e3ff9c321e77f265dc9d484ed2549df4a6b and https://github.com/openshift/ovn-kubernetes/commit/9a836e3a547f3464d433ce8b9eef336624d51858. We need to configure it by default on 0.0.0.0:10256 on CNO for ovnk, just like we do for sdn.
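Once wired up, every node should answer on that port the way sdn nodes already do; a sketch (endpoint path assumed to follow the kube-proxy healthz convention):
$ curl -s http://<node-ip>:10256/healthz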
This is a clone of issue OCPBUGS-3021. The following is the description of the original issue:
—
Description of problem:
me-west1 is not listed in the survey
Steps to Reproduce:
1. Run the survey (openshift-install create install-config without an install config file)
2. Go through the prompts until regions
Actual results:
me-west1 region is missing
Expected results:
me-west1 region is listed (and install succeeds in the region)
This is a clone of issue OCPBUGS-6816. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-6799. The following is the description of the original issue:
—
Description of problem:
The pipelines -> repositories list view in Dev Console does not show the running PipelineRun as the last PipelineRun in the table.
Original BugZilla Link: https://bugzilla.redhat.com/show_bug.cgi?id=2016006
OCPBUGSM: https://issues.redhat.com/browse/OCPBUGSM-36408
ENV:
OCP 4.11.29
VSphere IPI
ISSUE:
If scaling up 3-5 machines, the machines transition to running and join the cluster.
If scaling up a large number of machines (10-20+), some transition to running and join the cluster.
The remaining machines stay in the provisioned/provisioning state and never power on.
If those nodes are manually powered on, they join the cluster and are healthy.
OBSERVATION:
There are ~16k tags shared across multiple VMware Cloud Foundation data centers.
In the logs it is observed that the reconciliation of tags occurs, and then some of the nodes power on and transition to running.
The logs do not show the tag reconciliation for the remaining nodes running again or completing.
Description of problem:
If you set a service's cluster IP to an IP with a leading zero (e.g. 192.168.0.011), ovn-k should normalise this and remove the leading zero before sending it to OVN.
I saw this on a CI run executing the k8s test at test/e2e/network/funny_ips.go:75.
You can reproduce it using that test.
Have a read of the text there:
// What are funny IPs:
// The adjective is because of the curl blog that explains the history and the problem of liberal
// parsing of IP addresses and the consequences and security risks caused the lack of normalization,
// mainly due to the use of different notations to abuse parsers misalignment to bypass filters.
// xref: https://daniel.haxx.se/blog/2021/04/19/curl-those-funny-ipv4-addresses/
//
// Since golang 1.17, IPv4 addresses with leading zeros are rejected by the standard library.
// xref: https://github.com/golang/go/issues/30999
//
// Because this change on the parsers can cause that previous valid data become invalid, Kubernetes
// forked the old parsers allowing leading zeros on IPv4 address to not break the compatibility.
//
// Kubernetes interprets leading zeros on IPv4 addresses as decimal, users must not rely on parser
// alignment to not being impacted by the associated security advisory: CVE-2021-29923 golang
// standard library "net" - Improper Input Validation of octal literals in golang 1.16.2 and below
// standard library "net" results in indeterminate SSRF & RFI vulnerabilities. xref:
// https://nvd.nist.gov/vuln/detail/CVE-2021-29923
northd is logging an error about this also:
|socket_util|ERR|172.30.0.011:7180: bad IP address "172.30.0.011"
...
2022-08-23T14:14:21.968Z|01839|ovn_util|WARN|bad ip address or port for load balancer key 172.30.0.011:7180
Also, I see the error:
E0823 14:14:34.135115 3284 gateway_shared_intf.go:600] Failed to delete conntrack entry for service e2e-funny-ips-8626/funny-ip: failed to delete conntrack entry for service e2e-funny-ips-8626/funny-ip with svcVIP 172.30.0.011, svcPort 7180, protocol TCP: value "<nil>" passed to DeleteConntrack is not an IP address
We should normalise the IPs before sending them to OVN. There is also a conntrack error when trying to set this bad IP.
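For illustration, the decimal normalisation Kubernetes applies can be mimicked in shell; awk's string-to-number conversion is decimal, so the leading zeros drop out:
$ echo "172.30.0.011" | awk -F. '{ printf "%d.%d.%d.%d\n", $1, $2, $3, $4 }'
172.30.0.11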
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. See above k8 test
Actual results:
Leading zero IP sent to OVN
Expected results:
No leading zero IP sent to OVN
Additional info:
This is a clone of issue OCPBUGS-11972. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-9956. The following is the description of the original issue:
—
Description of problem:
The PipelineRun default template name has been updated in the backend in Pipeline operator 1.10, so we need to update the name in the UI code as well.
This is a clone of issue OCPBUGS-9464. The following is the description of the original issue:
—
Description of problem:
mTLS connection is not working when using an intermediate CA apart from the root CA, both with CRLs defined.
The Intermediate CA Cert had a published CDP which directed to a CRL issued by the root CA.
The config map in the openshift-ingress namespace contains the CRL as issued by the root CA. The CRL issued by the Intermediate CA is not present since that CDP is in the user cert and so not in the bundle.
When attempting to connect using a user certificate issued by the intermediate CA, it fails with an unknown CA error.
When attempting to connect using a user certificate issued by the root CA, the connection is successful.
Version-Release number of selected component (if applicable):
4.10.24
How reproducible:
Always
Steps to Reproduce:
1. Configure CA and intermediate CA with CRL
2. Sign client certificate with the intermediate CA
3. Configure mtls in openshift-ingress
Actual results:
When attempting to connect using a user certificate issued by the intermediate CA, it fails with an unknown CA error.
When attempting to connect using a user certificate issued by the root CA, the connection is successful.
Expected results:
Be able to connect with a client certificate signed by the intermediate CA
Additional info:
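The failure can be reproduced off-cluster with openssl (file names here are hypothetical): verification against a bundle carrying only the root CRL fails for a client certificate issued by the intermediate, while a bundle holding both CRLs succeeds:
$ cat root-crl.pem intermediate-crl.pem > all-crls.pem
$ openssl verify -crl_check_all -CAfile ca-chain.pem -CRLfile all-crls.pem client-cert.pem
client-cert.pem: OK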
The path used by --rotated-pod-logs to gather the rotated pod logs from /var/log/pods node folder via /api/v1/nodes/${NODE}/proxy/logs/${LOG_PATH} is only valid for regular pods but not for static pods.
The main problem is that, while normal pods have their rotated logs at this /var/log/pods/${POD_NAME}_${POD_UID_IN_API}/${CONTAINER_NAME}, static pods have them at /var/log/pods/${POD_NAME}_${CONFIG_HASH}/${CONTAINER_NAME} because the UID cannot be known at the time that the static pod is born (because static pods are created by kubelet before registering them in the kube-apiserver, and UID is assigned by the kube-apiserver).
The visible results of that are:
4.10
Always if there are static pods.
1. oc adm inspect --rotated-pod-logs ns/openshift-etcd (or any other project with static pods).
error: errors occurred while gathering data: one or more errors occurred while gathering pod-specific data for namespace: openshift-etcd [one or more errors occurred while gathering container data for pod etcd-master-0.example.net: the server could not find the requested resource, one or more errors occurred while gathering container data for pod etcd-master-1.example.net: the server could not find the requested resource, one or more errors occurred while gathering container data for pod etcd-master-2.example.net: the server could not find the requested resource]
No errors like the ones above, and rotated pod logs gathered, if present.
Despite being marked as experimental, this --rotated-pod-logs is used in must-gather, so this issue can be easily reproduced by just running a default must-gather. I focused on bare oc adm inspect reproducers for simplicity.
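The mismatch is easy to see by comparing the API UID with the kubelet's config hash on a mirror pod; the on-disk directory under /var/log/pods is named after the latter. A sketch:
$ oc -n openshift-etcd get pod etcd-master-0.example.net -o json | jq -r '.metadata.uid, .metadata.annotations["kubernetes.io/config.hash"]'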
This is a clone of issue OCPBUGS-7474. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-6714. The following is the description of the original issue:
—
Description of problem:
Traffic from egress IPs was interrupted after the cluster was patched to OpenShift 4.10.46.
A customer cluster was patched; it is an OpenShift 4.10.46 cluster with SDN.
More detail about the issue is available in a private comment below, since it contains customer data.
This bug was initially created as a copy of
Bug #2096605
I am copying this bug because the parent bug solved the validation aspect of diskType, but now the description of diskType in
https://github.com/openshift/installer/blob/master/data/data/install.openshift.io_installconfigs.yaml#L2914-L2923
needs to be updated.
Version: 4.11.0-0.nightly-2022-06-06-201913
Platform: vSphere IPI
What happened?
1. If the user inputs an invalid value for platform.vsphere.diskType in the install-config.yaml file, there is no validation of diskType; the installer doesn't exit with an error but continues the installation, which is not the same behavior as in 4.10.
After all VMs are provisioned, I checked that the disk provisioning type is thick.
2. If the user doesn't set platform.vsphere.diskType in the install-config.yaml file, the default disk provisioning type is thick, not the vSphere default storage policy. On VMC, the default policy is thin, so the description of diskType may also need to be updated.
$ ./openshift-install explain installconfig.platform.vsphere.diskType
KIND: InstallConfig
VERSION: v1
RESOURCE: <string>
Valid Values: "","thin","thick","eagerZeroedThick"
DiskType is the name of the disk provisioning type, valid values are thin, thick, and eagerZeroedThick. When not specified, it will be set according to the default storage policy of vsphere.
What did you expect to happen?
validation for diskType
How to reproduce it (as minimally and precisely as possible)?
Set diskType to an invalid value in install-config.yaml and install the cluster.
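For reference, the stanza under validation; anything outside the documented values ("", thin, thick, eagerZeroedThick) should make the installer exit with an error:
platform:
  vsphere:
    diskType: thin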
This is a clone of issue OCPBUGS-11844. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-5548. The following is the description of the original issue:
—
Description of problem:
This is a follow-up on https://bugzilla.redhat.com/show_bug.cgi?id=2083087 and https://github.com/openshift/console/pull/12390
When creating a Deployment, DeploymentConfig, or Knative Service with Pipeline enabled, and then deleting it again with the option "Delete other resources created by console" enabled (only available on 4.13+ with the PR above), the automatically created Pipeline is not deleted.
When the user tries to create the same resource with a Pipeline again, this fails with an error:
An error occurred
secrets "nodeinfo-generic-webhook-secret" already exists
Version-Release number of selected component (if applicable):
4.13
(we might want to backport this together with https://github.com/openshift/console/pull/12390 and OCPBUGS-5547)
How reproducible:
Always
Steps to Reproduce:
Actual results:
Case 1: Delete resources:
Case 2: Delete application:
Expected results:
Case 1: Delete resource:
Case 2: Delete application:
Additional info:
Description of problem:
Setting a telemeter proxy in the cluster-monitoring-config config map does not work as expected
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
The following KCS details the steps to add a proxy.
The steps have been verified on 4.7 but do not work on 4.8, 4.9 or 4.10:
https://access.redhat.com/solutions/6172402
When testing on 4.8, 4.9 and 4.10, the proxy settings were also nested under `telemeterClient`,
which triggered a telemeter restart, but the proxy settings do not get set in the deployment as they do in 4.7.
Actual results:
On 4.8, 4.9 and 4.10, without the nested `telemeterClient`,
the change does not trigger a restart of the telemeter pod.
Expected results:
I think the proxy settings should be nested under telemeterClient
and should set the environment variables in the deployment.
Additional info:
This is a backport of https://bugzilla.redhat.com/show_bug.cgi?id=2116382 from 4.12 to 4.11.z. Creating manually because as seen in https://github.com/openshift/cluster-monitoring-operator/pull/1743 `/cherry-pick` doesn't work for bugs originally created in bugzilla
This is a clone of issue OCPBUGS-8286. The following is the description of the original issue:
—
Description of problem:
mapi_machinehealthcheck_short_circuit is not properly reconciling the state, when a MachineHealthCheck is failing because of unhealthy Machines but then is removed. When doing two MachineSet (called blue and green and only one has running Machines at a specific point in time) with MachineAutoscaler and MachineHealthCheck, the mapi_machinehealthcheck_short_circuit will continue to report 1 for MachineHealth that actually was removed because of a switch from blue to green. $ oc get machineset | egrep 'blue|green' housiocp4-wvqbx-worker-blue-us-east-2a 0 0 2d17h housiocp4-wvqbx-worker-green-us-east-2a 1 1 1 1 2d17h $ oc get machineautoscaler NAME REF KIND REF NAME MIN MAX AGE worker-green-us-east-1a MachineSet housiocp4-wvqbx-worker-green-us-east-2a 1 4 2d17h $ oc get machinehealthcheck NAME MAXUNHEALTHY EXPECTEDMACHINES CURRENTHEALTHY machine-api-termination-handler 100% 0 0 worker-green-us-east-1a 40% 1 1 { "name": "machine-health-check-unterminated-short-circuit", "file": "/etc/prometheus/rules/prometheus-k8s-rulefiles-0/openshift-machine-api-machine-api-operator-prometheus-rules-ccb650d9-6fc4-422b-90bb-70452f4aff8f.yaml", "rules": [ { "state": "firing", "name": "MachineHealthCheckUnterminatedShortCircuit", "query": "mapi_machinehealthcheck_short_circuit == 1", "duration": 1800, "labels": { "severity": "warning" }, "annotations": { "description": "The number of unhealthy machines has exceeded the `maxUnhealthy` limit for the check, you should check\nthe status of machines in the cluster.\n", "summary": "machine health check {{ $labels.name }} has been disabled by short circuit for more than 30 minutes" }, "alerts": [ { "labels": { "alertname": "MachineHealthCheckUnterminatedShortCircuit", "container": "kube-rbac-proxy-mhc-mtrc", "endpoint": "mhc-mtrc", "exported_namespace": "openshift-machine-api", "instance": "10.128.0.58:8444", "job": "machine-api-controllers", "name": "worker-blue-us-east-1a", "namespace": "openshift-machine-api", "pod": "machine-api-controllers-779dcb8769-8gcn6", "service": "machine-api-controllers", "severity": "warning" }, "annotations": { "description": "The number of unhealthy machines has exceeded the `maxUnhealthy` limit for the check, you should check\nthe status of machines in the cluster.\n", "summary": "machine health check worker-blue-us-east-1a has been disabled by short circuit for more than 30 minutes" }, "state": "firing", "activeAt": "2022-12-09T15:59:25.1287541Z", "value": "1e+00" } ], "health": "ok", "evaluationTime": 0.000648129, "lastEvaluation": "2022-12-12T09:35:55.140174009Z", "type": "alerting" } ], "interval": 30, "limit": 0, "evaluationTime": 0.000661589, "lastEvaluation": "2022-12-12T09:35:55.140165629Z" }, As we can see above, worker-blue-us-east-1a is no longer available and active but rather worker-green-us-east-1a. But worker-blue-us-east-1a was there before the switch to green has happen and was actuall reporting some unhealthy Machines. But since it's now gone, mapi_machinehealthcheck_short_circuit should properly reconcile as otherwise this is a false/positive alert.
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4.12.0-rc.3 (but is also seen on previous version)
How reproducible:
- Always
Steps to Reproduce:
1. Set up OpenShift Container Platform 4 on AWS, for example
2. Create blue and green MachineSets with MachineAutoscaler and MachineHealthCheck
3. Have active Machines for blue only
4. Trigger unhealthy Machines in the blue MachineSet
5. Switch to the green MachineSet, by removing MachineHealthCheck and MachineAutoscaler and setting replicas of the blue MachineSet to 0
6. Create the green MachineHealthCheck and MachineAutoscaler and scale the green MachineSet to 1
7. Observe how mapi_machinehealthcheck_short_circuit continues to report an unhealthy state for the blue MachineHealthCheck, which no longer exists
Actual results:
mapi_machinehealthcheck_short_circuit reports a problematic MachineHealthCheck even though the faulty MachineHealthCheck no longer exists.
Expected results:
mapi_machinehealthcheck_short_circuit should properly reconcile its state and drop MachineHealthChecks that have been removed at the OpenShift Container Platform level.
Additional info:
It kind of looks like similar to the issue reported in https://bugzilla.redhat.com/show_bug.cgi?id=2013528 respectively https://bugzilla.redhat.com/show_bug.cgi?id=2047702 (although https://bugzilla.redhat.com/show_bug.cgi?id=2047702 may not be super relevant)
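The stale series can be confirmed straight from Prometheus; a sketch, assuming curl is available in the prometheus container:
$ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/query?query=mapi_machinehealthcheck_short_circuit==1'
After a proper reconcile, the series for the removed worker-blue-us-east-1a MachineHealthCheck should disappear from the result.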
This is a clone of issue OCPBUGS-4311. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-4305. The following is the description of the original issue:
—
Description of problem:
Please add an option to DISABLE debug in ironic-api. Presently it is enabled by default and there is no way to disable it or reduce the log level.
https://github.com/metal3-io/ironic-image/blob/main/ironic-config/ironic.conf.j2#L3
Version-Release number of selected component (if applicable): none
How reproducible: Every time
Steps to Reproduce:
Please check source code here: https://github.com/metal3-io/ironic-image/blob/main/ironic-config/ironic.conf.j2#L3
It is enabled by default and there is no way to disable it or reduce the log level.
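The knob in question is the standard oslo `debug` option in ironic.conf; a sketch of what an opt-out would render (the template linked above currently hardcodes it):
[DEFAULT]
debug = false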
Actual results:
Please check case 03371411; the log file grew to 409 GB.
Expected results: Need a way to disable debug
Additional info: Case 03371411. A cluster must-gather and log file can be found in the case.
This is a clone of issue OCPBUGS-4897. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-2500. The following is the description of the original issue:
—
Description of problem:
When the UX switches to the Dev console, the topology is always blank in a project that has a large number of components.
Version-Release number of selected component (if applicable):
How reproducible:
Always occurs
Steps to Reproduce:
1. Create a project with at least 12 components (Apps, Operators, Knative Brokers)
2. Go to the Administrator viewpoint
3. Switch to Developer viewpoint/Topology
4. No components displayed
5. Click on 'fit to screen'
6. All components appear
Actual results:
Topology renders with all controls but no components visible (see screenshot 1)
Expected results:
All components should be visible
Additional info:
Description of problem:
When running node-density (245 pods/node) on a 120-node cluster, we see a huge spike (~22s) in average pod latency. When the spike occurs, we see all the ovnkube-master pods go through a restart.
The restart happens because of this panic in the ovnkube-master pods:
2022-08-10T04:04:44.494945179Z panic: reflect: call of reflect.Value.Len on ptr Value
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-09-114621
How reproducible:
Steps to Reproduce:
1. Run node-density on a 120 node cluster
Actual results:
Spike observed in pod-latency graph ~22s
Expected results:
Steady pod-latency graph ~4s
Additional info:
This is a clone of issue OCPBUGS-12956. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-12910. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-12904. The following is the description of the original issue:
—
Description of problem:
In order to test proxy installations, the CI base image for OpenShift on OpenStack needs netcat.
Description of problem:
To address: 'Static Pod is managed but errored' err='managed container xxx does not have Resource.Requests'
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-10622. The following is the description of the original issue:
—
Description of problem:
Unit test failing:
=== RUN TestNewAppRunAll/app_generation_using_context_dir
newapp_test.go:907: app generation using context dir: Error mismatch! Expected <nil>, got supplied context directory '2.0/test/rack-test-app' does not exist in 'https://github.com/openshift/sti-ruby'
--- FAIL: TestNewAppRunAll/app_generation_using_context_dir (0.61s)
Version-Release number of selected component (if applicable):
How reproducible:
100
Steps to Reproduce:
see for example https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_oc/1376/pull-ci-openshift-oc-master-images/1638172620648091648
Actual results:
unit tests fail
Expected results:
TestNewAppRunAll unit test should pass
Additional info:
This is a clone of issue OCPBUGS-11989. The following is the description of the original issue:
—
Description of problem:
Customer reports that when trying to create an application using the "Import from Git" workflow, the "Create" button at the very bottom of the form stays inactive. You can observe the issue in the video shared via Google Drive here (timestamp 00:35): https://drive.google.com/file/d/1GEA_TF5vV_ai9YDMZ3uzwEwYKkp_CY8r/view?usp=sharing The customer can work around the issue by selecting an Import Strategy other than "Builder Image" and then switching back to "Builder Image" (timestamp 00:49).
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4.11.25
How reproducible:
Always
Steps to Reproduce:
1. Click the "+Add" button on the left menu
2. Enter a Git repository URL
3. Select "Bitbucket" as Git type
3a. If necessary, select a "Source Secret"
4. For "Import Strategy", select "Builder Image" and select one of the available images
5. In "Application" select "Create application"
6. For "Application Name" and "Name" insert any valid value
Actual results:
"Create" button at the bottom of the form is inactive and cannot be clicked. Changing the Import Strategy to something else and then back to "Builder Image" makes the button active.
Expected results:
Button is active after filling out all the required form fields
Additional info:
* Video of the issue provided: https://drive.google.com/file/d/1GEA_TF5vV_ai9YDMZ3uzwEwYKkp_CY8r/view?usp=sharing
This is a clone of issue OCPBUGS-78. The following is the description of the original issue:
—
Copied from an upstream issue: https://github.com/operator-framework/operator-lifecycle-manager/issues/2830
What did you do?
When attempting to reinstall an operator that uses conversion webhooks by
The resulting InstallPlan enters a failed state with message similar to
error validating existing CRs against new CRD's schema for "devworkspaces.workspace.devfile.io": error listing resources in GroupVersionResource schema.GroupVersionResource{Group:"workspace.devfile.io", Version:"v1alpha1", Resource:"devworkspaces"}: conversion webhook for workspace.devfile.io/v1alpha2, Kind=DevWorkspace failed: Post "https://devworkspace-controller-manager-service.test-namespace.svc:443/convert?timeout=30s": service "devworkspace-controller-manager-service" not found
When the original CSVs are deleted, the operator's main deployment and service are removed, but CRDs are left in-cluster. However, since the service/CA bundle/deployment that serve the conversion webhook are removed, conversion webhooks are broken at that point. Eventually this impacts garbage collection on the cluster as well.
This can be reproduced by installing the DevWorkspace Operator from the Red Hat catalog. (I can provide yamls/upstream images that reproduce as well, if that's helpful). It may be necessary to create a DevWorkspace in the cluster before deletion, e.g. by oc apply -f https://raw.githubusercontent.com/devfile/devworkspace-operator/main/samples/plain.yaml
What did you expect to see?
Operator is able to be reinstalled without removing CRDs and all instances.
What did you see instead? Under which circumstances?
It's necessary to completely remove the operator including CRDs. For our operator (DevWorkspace), this also makes uninstall especially complicated as finalizers are used (so CRDs cannot be deleted if the controller is removed, and the controller cannot be restored by reinstalling)
Environment
operator-lifecycle-manager version: 4.10.24
Kubernetes version information: Kubernetes Version: v1.23.5+012e945 (OpenShift 4.10.24)
Kubernetes cluster kind: OpenShift
This is a clone of issue OCPBUGS-501. The following is the description of the original issue:
—
Description of problem:
Version-Release number of selected component (if applicable): 4.10.16
How reproducible: Always
Steps to Reproduce:
1. Edit the apiserver resource and add spec.audit.customRules field
$ oc get apiserver cluster -o yaml
spec:
  audit:
    customRules:
2. Allow the kube-apiserver pods to roll out a new revision.
3. Once the kube-apiserver pods are on the new revision, execute: $ oc get dc
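For reference, a populated customRules stanza for step 1 might look like this (the group/profile values are illustrative; they are the documented fields of the audit custom rules API):
spec:
  audit:
    customRules:
    - group: system:authenticated:oauth
      profile: WriteRequestBodies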
Actual results:
Error from server (InternalError): an error on the server ("This request caused apiserver to panic. Look in the logs for details.") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)
Expected results: The command "oc get dc" should display the deploymentconfig without any error.
Additional info:
Since 4.11, OCP comes with an OperatorHub definition which declares a capability
and enables all default catalog sources. For OKD we want to enable just community-operators,
as users may not have a Red Hat pull secret set.
This commit ensures that the OKD version of the marketplace operator gets
its own OperatorHub manifest with a custom set of operator catalogs enabled.
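For illustration, a hedged sketch of what such an OKD-specific OperatorHub manifest could look like (the exact field values are an assumption, not the shipped manifest):
~~~
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true
  sources:
  - name: community-operators
    disabled: false
~~~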
This is a clone of issue OCPBUGS-7373. The following is the description of the original issue:
—
Originally reported by lance5890 in issue https://github.com/openshift/cluster-etcd-operator/issues/1000
Under some circumstances the static pod machinery fails to populate the node status in time to generate the correct env variables for ETCD_URL_HOST, ETCD_NAME, etc. The pods that come up then fail to start because those variables are missing or incorrect.
This is particularly pronounced in SNO topologies, leading to installation failures.
The fix is to fail fast in the targetconfig/envvar controller to ensure the CEO goes degraded instead of silently failing on the rollout of an invalid static pod.
This was originally reported in BZ as https://bugzilla.redhat.com/show_bug.cgi?id=2046335
—
Description of problem:
The issue reported here https://bugzilla.redhat.com/show_bug.cgi?id=1954121 still occurs (tested on OCP 4.8.11; the customer also verified that the issue can happen even with OpenShift 4.7.30, 4.8.17 and 4.9.11)
How reproducible:
Attaching a NIC to a master node triggers the issue
Steps to Reproduce:
1. Deploy an OCP cluster (I've tested it IPI on AWS)
2. Attach a second NIC to a running master node (in my case "ip-10-0-178-163.eu-central-1.compute.internal")
Actual results:
~~~
$ oc get node ip-10-0-178-163.eu-central-1.compute.internal -o json | jq ".status.addresses"
[
  { ... },
  { ... },
  { ... },
  { "address": "ip-10-0-178-163.eu-central-1.compute.internal", "type": "InternalDNS" }
]
$ oc get co etcd
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
etcd 4.8.11 True False True 31h
$ oc get co etcd -o json | jq ".status.conditions[0]"
{ "lastTransitionTime": "2022-01-26T15:47:42Z", "message": "EtcdCertSignerControllerDegraded: [x509: certificate is valid for 10.0.178.163, not 10.0.187.247, x509: certificate is valid for ::1, 10.0.178.163, 127.0.0.1, ::1, not 10.0.187.247]", "reason": "EtcdCertSignerController_Error", "status": "True", "type": "Degraded" }~~~
Expected results:
The certificate should be valid also for the second IP (the newly created one, "10.0.187.247")
Additional info:
Deleting the following secrets seems to solve the issue:
~~~
$ oc get secret -n openshift-etcd | grep kubernetes.io/tls | grep ^etcd
etcd-client kubernetes.io/tls 2 61s
etcd-peer-ip-10-0-132-49.eu-central-1.compute.internal kubernetes.io/tls 2 61s
etcd-peer-ip-10-0-178-163.eu-central-1.compute.internal kubernetes.io/tls 2 61s
etcd-peer-ip-10-0-202-187.eu-central-1.compute.internal kubernetes.io/tls 2 60s
etcd-serving-ip-10-0-132-49.eu-central-1.compute.internal kubernetes.io/tls 2 60s
etcd-serving-ip-10-0-178-163.eu-central-1.compute.internal kubernetes.io/tls 2 59s
etcd-serving-ip-10-0-202-187.eu-central-1.compute.internal kubernetes.io/tls 2 59s
etcd-serving-metrics-ip-10-0-132-49.eu-central-1.compute.internal kubernetes.io/tls 2 58s
etcd-serving-metrics-ip-10-0-178-163.eu-central-1.compute.internal kubernetes.io/tls 2 59s
etcd-serving-metrics-ip-10-0-202-187.eu-central-1.compute.internal kubernetes.io/tls 2 58s
$ oc get secret -n openshift-etcd | grep kubernetes.io/tls | grep ^etcd | awk '{print $1}' | xargs -I {} oc delete secret {} -n openshift-etcd
secret "etcd-client" deleted
secret "etcd-peer-ip-10-0-132-49.eu-central-1.compute.internal" deleted
secret "etcd-peer-ip-10-0-178-163.eu-central-1.compute.internal" deleted
secret "etcd-peer-ip-10-0-202-187.eu-central-1.compute.internal" deleted
secret "etcd-serving-ip-10-0-132-49.eu-central-1.compute.internal" deleted
secret "etcd-serving-ip-10-0-178-163.eu-central-1.compute.internal" deleted
secret "etcd-serving-ip-10-0-202-187.eu-central-1.compute.internal" deleted
secret "etcd-serving-metrics-ip-10-0-132-49.eu-central-1.compute.internal" deleted
secret "etcd-serving-metrics-ip-10-0-178-163.eu-central-1.compute.internal" deleted
secret "etcd-serving-metrics-ip-10-0-202-187.eu-central-1.compute.internal" deleted
$ oc get co etcd -o json | jq ".status.conditions[0]"
{ "lastTransitionTime": "2022-01-26T15:52:21Z", "message": "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found", "reason": "AsExpected", "status": "False", "type": "Degraded" }~~~
Description of problem:
For some reason, the LSP of a pod is not properly added to the port group where the ACL of a NetworkPolicy is applied. This results in the NetworkPolicy not being applied to the pod, and communication is not possible.
Version-Release number of selected component (if applicable):
4.10
How reproducible:
Always, with a specific pod in the customer environment.
Steps to Reproduce:
(not known exactly yet)
Actual results:
LSP not in port group. ACL not applied. Netpol not in effect.
Expected results:
LSP in port group. ACL applied. Netpol in effect.
Additional info:
Details in private comments, as they involve sensitive data. Deleting the pod does nothing, but it is possible that this has something to do with the pod being recreated with the same name (although the LSPs UUIDs are different in each incarnation).
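As a hedged sketch of how the missing LSP can be checked from an ovnkube-master pod (the pod, namespace and pod-name placeholders are assumptions; ovn-kubernetes names the LSP <namespace>_<pod-name>):
~~~
# list port groups and the ports they contain
$ oc exec -n openshift-ovn-kubernetes <ovnkube-master-pod> -c nbdb -- ovn-nbctl list port_group
# look up the pod's logical switch port by name
$ oc exec -n openshift-ovn-kubernetes <ovnkube-master-pod> -c nbdb -- \
    ovn-nbctl --columns=name,_uuid find logical_switch_port name=<namespace>_<pod-name>
~~~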
This is a clone of issue OCPBUGS-4460. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-3164. The following is the description of the original issue:
—
During the first bootstrap boot we need crio and kubelet on the disk, so we start the release-image-pivot systemd task. However, it's not blocking bootkube, so these two run in parallel.
release-image-pivot restarts the node to apply the new OS image, which may leave bootkube in an inconsistent state. This task should run before bootkube.
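A minimal sketch of the intended ordering as a systemd drop-in (the unit file names are assumptions based on the description, not the actual fix):
~~~
# /etc/systemd/system/bootkube.service.d/10-wait-for-pivot.conf
[Unit]
# make bootkube wait until the OS pivot (and its reboot) has completed
After=release-image-pivot.service
Wants=release-image-pivot.service
~~~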
As discussed previously by email, customer support case 03211616 requests a means to use the latest patch version of a given X.Y golang via imagestreams, with the blocking issue being the lack of X.Y tags for the go-toolset containers on RHCC.
The latter has now been fixed, with the latest version also getting a :1.17 tag, and the imagestream source has been modified accordingly, which will get picked up in 4.12. We can now fix this in 4.11.z by backporting the change to the imagestream files bundled in cluster-samples-operator.
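For illustration, a hedged sketch of what such an X.Y imagestream tag looks like (the registry path and referencePolicy are assumptions, not the exact backported content):
~~~
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: golang
spec:
  tags:
  - name: "1.17"
    from:
      kind: DockerImage
      name: registry.redhat.io/ubi8/go-toolset:1.17
    referencePolicy:
      type: Local
~~~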
/cc Ian Watson Feny Mehta
Description of problem:
The last egressIP within the node's IP capacity cannot be applied correctly
Version-Release number of selected component (if applicable):
4.11.0-0.nightly-2022-11-27-164248
How reproducible:
Found this failure in an automation case which used to pass
Steps to Reproduce:
In AWS, one worker node has IP capacity "ipv4: 14":
$ oc describe node ip-10-0-129-21.us-west-2.compute.internal | egrep -C 3 egress
Annotations: cloud.network.openshift.io/egress-ipconfig: [{"interface":"eni-03820ba0eca427fbf","ifaddr":{"ipv4":"10.0.128.0/18"},"capacity":{"ipv4":14,"ipv6":15}}]
             csi.volume.kubernetes.io/nodeid: {"ebs.csi.aws.com":"i-0cb3ae15bf3cffd9f"}
             k8s.ovn.org/host-addresses: ["10.0.129.21"]
1. Label above node as egress node
2. Created 15 egressIP objects, each egressIP object with one egressIP.
Actual results:
13 egressIPs applied correctly; the last one is in the wrong status.
% oc get egressip
NAME                EGRESSIPS      ASSIGNED NODE                               ASSIGNED EGRESSIPS
egressip-47208-0    10.0.135.200   ip-10-0-129-21.us-west-2.compute.internal   10.0.135.200
egressip-47208-1    10.0.162.178   ip-10-0-129-21.us-west-2.compute.internal   10.0.162.178
egressip-47208-10   10.0.144.46    ip-10-0-129-21.us-west-2.compute.internal   10.0.144.46
egressip-47208-11   10.0.191.91    ip-10-0-129-21.us-west-2.compute.internal   10.0.191.91
egressip-47208-12   10.0.133.215   ip-10-0-129-21.us-west-2.compute.internal   10.0.133.215
egressip-47208-13   10.0.174.207   ip-10-0-129-21.us-west-2.compute.internal   10.0.174.207
egressip-47208-14   10.0.176.224   ip-10-0-129-21.us-west-2.compute.internal   10.0.176.224
egressip-47208-2    10.0.184.114   ip-10-0-129-21.us-west-2.compute.internal   10.0.184.114
egressip-47208-3    10.0.167.224   ip-10-0-129-21.us-west-2.compute.internal   10.0.167.224
egressip-47208-4    10.0.187.148   ip-10-0-129-21.us-west-2.compute.internal   10.0.187.148
egressip-47208-5    10.0.184.109   ip-10-0-129-21.us-west-2.compute.internal   10.0.184.109
egressip-47208-6    10.0.155.208
egressip-47208-7    10.0.134.13    ip-10-0-129-21.us-west-2.compute.internal   10.0.134.13
egressip-47208-8    10.0.142.255   ip-10-0-129-21.us-west-2.compute.internal   10.0.142.255
egressip-47208-9    10.0.170.197
% oc get cloudprivateipconfig
NAME           AGE
10.0.133.215   113s
10.0.134.13    113s
10.0.135.200   113s
10.0.142.255   113s
10.0.144.46    113s
10.0.155.208   113s
10.0.162.178   113s
10.0.167.224   113s
10.0.174.207   113s
10.0.176.224   113s
10.0.184.109   113s
10.0.184.114   113s
10.0.187.148   113s
10.0.191.91    113s
% oc get cloudprivateipconfig 10.0.155.208 -o yaml
apiVersion: cloud.network.openshift.io/v1
kind: CloudPrivateIPConfig
metadata:
  annotations:
    k8s.ovn.org/egressip-owner-ref: egressip-47208-6
  creationTimestamp: "2022-11-28T09:13:47Z"
  finalizers:
  - cloudprivateipconfig.cloud.network.openshift.io/finalizer
  generation: 1
  name: 10.0.155.208
  resourceVersion: "72869"
  uid: 0143a07b-0a30-4de8-bfd7-589ed0c3d7dc
spec:
  node: ip-10-0-129-21.us-west-2.compute.internal
status:
  conditions:
  - lastTransitionTime: "2022-11-28T09:16:51Z"
    message: 'Error processing cloud assignment request, err: <nil>'
    observedGeneration: 1
    reason: CloudResponseError
    status: "False"
    type: Assigned
  node: ip-10-0-129-21.us-west-2.compute.internal
Expected results:
As the IP capacity for this node is 14, 14 egress IPs should be applied successfully, and no cloudprivateipconfig should be left in an error status.
Additional info:
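For reference, a quick way to read the node's egress IP capacity annotation (node name taken from the report):
~~~
$ oc get node ip-10-0-129-21.us-west-2.compute.internal \
    -o jsonpath='{.metadata.annotations.cloud\.network\.openshift\.io/egress-ipconfig}'
~~~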
This is a clone of issue OCPBUGS-676. The following is the description of the original issue:
—
The machine approver isn't recognizing hostnames that use capital letters as valid, even though DNS is case-insensitive.
an example of this is in OHSS-14709:
I0822 19:04:51.587266 1 controller.go:114] Reconciling CSR: csr-vdtpv I0822 19:04:51.600941 1 csr_check.go:156] csr-vdtpv: CSR does not appear to be client csr I0822 19:04:51.603648 1 csr_check.go:542] retrieving serving cert from ip-100-66-119-117.ec2.internal (100.66.119.117:10250) I0822 19:04:51.604003 1 csr_check.go:181] Failed to retrieve current serving cert: dial tcp 100.66.119.117:10250: connect: connection refused I0822 19:04:51.604017 1 csr_check.go:201] Falling back to machine-api authorization for ip-100-66-119-117.ec2.internal E0822 19:04:51.604024 1 csr_check.go:392] csr-vdtpv: DNS name 'ip-100-66-119-117.tech-ace-maint-prd.aws.delta.com' not in machine names: ip-100-66-119-117.ec2.internal ip-100-66-119-117.ec2.internal ip-100-66-119-117.tech-ACE-maint-prd.aws.delta.com I0822 19:04:51.604033 1 csr_check.go:204] Could not use Machine for serving cert authorization: DNS name 'ip-100-66-119-117.tech-ace-maint-prd.aws.delta.com' not in machine names: ip-100-66-119-117.ec2.internal ip-100-66-119-117.ec2.internal ip-100-66-119-117.tech-ACE-maint-prd.aws.delta.com I0822 19:04:51.606777 1 controller.go:199] csr-vdtpv: CSR not authorized
This can be worked around by manually approving the CSR
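A sketch of that manual workaround (the CSR name is the one from the log above):
~~~
# inspect pending CSRs, then approve the affected one
$ oc get csr
$ oc adm certificate approve csr-vdtpv
~~~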
The relevant line in the machine approver appears to be here: https://github.com/openshift/cluster-machine-approver/blob/master/pkg/controller/csr_check.go#L378
This is a clone of issue OCPBUGS-11218. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10950. The following is the description of the original issue:
—
"pipelines-as-code-pipelinerun-go" configMap is not been used for the Go repository while creating Pipeline Repository. "pipelines-as-code-pipelinerun-generic" configMap has been used.
Install Red Hat Pipeline operator
`pipelines-as-code-pipelinerun-generic` PipelineRun template has been shown on the overview page
`pipelines-as-code-pipelinerun-go` PipelineRun template should show on the overview page
4.13
Description of problem:
This is intended to backport the corresponding https://bugzilla.redhat.com/show_bug.cgi?id=2095852, which has already been fixed for this version.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
When creating a pod with an additional network that contains a `spec.config.ipam.exclude` range, any address within the excluded range is still iterated while searching for a suitable IP candidate. As a result, pod creation times out when large exclude ranges are used.
Version-Release number of selected component (if applicable):
How reproducible:
with big exclude ranges, 100%
Steps to Reproduce:
1. create a network-attachment-definition with a large exclude range:
$ cat <<EOF | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: nad-w-excludes
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "macvlan-net",
      "type": "macvlan",
      "master": "ens3",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "fd43:01f1:3daa:0baa::/64",
        "exclude": [ "fd43:01f1:3daa:0baa::/100" ],
        "log_file": "/tmp/whereabouts.log",
        "log_level": "debug"
      }
    }
EOF
2. create a pod with the network attached:
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-exclude-range
  annotations:
    k8s.v1.cni.cncf.io/networks: nad-w-excludes
spec:
  containers:
  - name: pod-1
    image: openshift/hello-openshift
EOF
3. check pod status, event log and whereabouts logs after a while:
$ oc get pods
NAME                     READY   STATUS              RESTARTS   AGE
pod-with-exclude-range   0/1     ContainerCreating   0          2m23s
$ oc get events
<...>
6m39s   Normal    Scheduled                pod/pod-with-exclude-range   Successfully assigned default/pod-with-exclude-range to <worker-node>
6m37s   Normal    AddedInterface           pod/pod-with-exclude-range   Add eth0 [10.129.2.49/23] from openshift-sdn
2m39s   Warning   FailedCreatePodSandBox   pod/pod-with-exclude-range   Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
$ oc debug node/<worker-node> -- tail /host/tmp/whereabouts.log
Starting pod/<worker-node>-debug ...
To use host binaries, run `chroot /host`
2022-10-27T14:14:50Z [debug] Finished leader election
2022-10-27T14:14:50Z [debug] IPManagement: {fd43:1f1:3daa:baa::1 ffffffffffffffff0000000000000000} , <nil>
2022-10-27T14:14:59Z [debug] Used defaults from parsed flat file config @ /etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.conf
2022-10-27T14:14:59Z [debug] ADD - IPAM configuration successfully read: {Name:macvlan-net Type:whereabouts Routes:[] Datastore:kubernetes Addresses:[] OmitRanges:[fd43:01f1:3daa:0baa::/80] DNS:{Nameservers:[] Domain: Search:[] Options:[]} Range:fd43:1f1:3daa:baa::/64 RangeStart:fd43:1f1:3daa:baa:: RangeEnd:<nil> GatewayStr: EtcdHost: EtcdUsername: EtcdPassword:********* EtcdKeyFile: EtcdCertFile: EtcdCACertFile: LeaderLeaseDuration:1500 LeaderRenewDeadline:1000 LeaderRetryPeriod:500 LogFile:/tmp/whereabouts.log LogLevel:debug OverlappingRanges:true SleepForRace:0 Gateway:<nil> Kubernetes:{KubeConfigPath:/etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.kubeconfig K8sAPIRoot:} ConfigurationPath: PodName:pod-with-exclude-range PodNamespace:default}
2022-10-27T14:14:59Z [debug] Beginning IPAM for ContainerID: f4ffd0e07d6c1a2b6ffb0fa29910c795258792bb1a1710ff66f6b48fab37af82
2022-10-27T14:14:59Z [debug] Started leader election
2022-10-27T14:14:59Z [debug] OnStartedLeading() called
2022-10-27T14:14:59Z [debug] Elected as leader, do processing
2022-10-27T14:14:59Z [debug] IPManagement - mode: 0 / containerID:f4ffd0e07d6c1a2b6ffb0fa29910c795258792bb1a1710ff66f6b48fab37af82 / podRef: default/pod-with-exclude-range
2022-10-27T14:14:59Z [debug] IterateForAssignment input >> ip: fd43:1f1:3daa:baa:: | ipnet: {fd43:1f1:3daa:baa:: ffffffffffffffff0000000000000000} | first IP: fd43:1f1:3daa:baa::1 | last IP: fd43:1f1:3daa:baa:ffff:ffff:ffff:ffff
Actual results:
Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Expected results:
additional network gets attached to the pod
Additional info:
This is a clone of issue OCPBUGS-7445. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-7207. The following is the description of the original issue:
—
At some point in the mtu-migration development, a configuration file was generated at /etc/cno/mtu-migration/config which was used as a flag to indicate to configure-ovs that a migration procedure was in progress. When that file was missing, the migration procedure was assumed to be over and configure-ovs did some cleaning on its behalf.
But that changed, and /etc/cno/mtu-migration/config is never set. That causes configure-ovs to remove mtu-migration information while the procedure is still in progress, making it use incorrect MTU values and either causing nodes to be tainted with "ovn.k8s.org/mtu-too-small" (blocking the procedure itself) or causing network disruption until the procedure is over.
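As a minimal sketch of the flag-file contract described above (shell pseudologic under the stated assumptions; not the actual configure-ovs source):
~~~
# in configure-ovs: only clean up migration state when no migration is flagged
if [ ! -e /etc/cno/mtu-migration/config ]; then
  # no migration in progress (per the old contract), so it is safe to clean up
  rm -rf /etc/cno/mtu-migration
fi
~~~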
However, this was not a problem for the CI job, as it doesn't use the migration procedure as documented, for the sake of saving the limited time available to run CI jobs. The CI merges two steps of the procedure into one so that there is never a reboot while the procedure is in progress, hiding this issue.
This was probably not detected in QE as well for the same reason as CI.
This is a clone of issue OCPBUGS-13183. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10990. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10526. The following is the description of the original issue:
—
Description of problem:
Version-Release number of selected component (if applicable):
4.13.0-0.nightly-2023-03-17-161027
How reproducible:
Always
Steps to Reproduce:
1. Create a GCP XPN cluster with flexy job template ipi-on-gcp/versioned-installer-xpn-ci, then 'oc describe node'
2. Check logs for cloud-network-config-controller pods
Actual results:
% oc get nodes
NAME                                                          STATUS   ROLES                  AGE    VERSION
huirwang-0309d-r85mj-master-0.c.openshift-qe.internal         Ready    control-plane,master   173m   v1.26.2+06e8c46
huirwang-0309d-r85mj-master-1.c.openshift-qe.internal         Ready    control-plane,master   173m   v1.26.2+06e8c46
huirwang-0309d-r85mj-master-2.c.openshift-qe.internal         Ready    control-plane,master   173m   v1.26.2+06e8c46
huirwang-0309d-r85mj-worker-a-wsrls.c.openshift-qe.internal   Ready    worker                 162m   v1.26.2+06e8c46
huirwang-0309d-r85mj-worker-b-5txgq.c.openshift-qe.internal   Ready    worker                 162m   v1.26.2+06e8c46
`oc describe node` shows no related egressIP annotations:
% oc describe node huirwang-0309d-r85mj-worker-a-wsrls.c.openshift-qe.internal
Name:         huirwang-0309d-r85mj-worker-a-wsrls.c.openshift-qe.internal
Roles:        worker
Labels:       beta.kubernetes.io/arch=amd64
              beta.kubernetes.io/instance-type=n2-standard-4
              beta.kubernetes.io/os=linux
              failure-domain.beta.kubernetes.io/region=us-central1
              failure-domain.beta.kubernetes.io/zone=us-central1-a
              kubernetes.io/arch=amd64
              kubernetes.io/hostname=huirwang-0309d-r85mj-worker-a-wsrls.c.openshift-qe.internal
              kubernetes.io/os=linux
              machine.openshift.io/interruptible-instance=
              node-role.kubernetes.io/worker=
              node.kubernetes.io/instance-type=n2-standard-4
              node.openshift.io/os_id=rhcos
              topology.gke.io/zone=us-central1-a
              topology.kubernetes.io/region=us-central1
              topology.kubernetes.io/zone=us-central1-a
Annotations:  csi.volume.kubernetes.io/nodeid: {"pd.csi.storage.gke.io":"projects/openshift-qe/zones/us-central1-a/instances/huirwang-0309d-r85mj-worker-a-wsrls"}
              k8s.ovn.org/host-addresses: ["10.0.32.117"]
              k8s.ovn.org/l3-gateway-config: {"default":{"mode":"shared","interface-id":"br-ex_huirwang-0309d-r85mj-worker-a-wsrls.c.openshift-qe.internal","mac-address":"42:01:0a:00:...
              k8s.ovn.org/node-chassis-id: 7fb1870c-4315-4dcb-910c-0f45c71ad6d3
              k8s.ovn.org/node-gateway-router-lrp-ifaddr: {"ipv4":"100.64.0.5/16"}
              k8s.ovn.org/node-mgmt-port-mac-address: 16:52:e3:8c:13:e2
              k8s.ovn.org/node-primary-ifaddr: {"ipv4":"10.0.32.117/32"}
              k8s.ovn.org/node-subnets: {"default":["10.131.0.0/23"]}
              machine.openshift.io/machine: openshift-machine-api/huirwang-0309d-r85mj-worker-a-wsrls
              machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
              machineconfiguration.openshift.io/currentConfig: rendered-worker-bec5065070ded51e002c566a9c5bd16a
              machineconfiguration.openshift.io/desiredConfig: rendered-worker-bec5065070ded51e002c566a9c5bd16a
              machineconfiguration.openshift.io/desiredDrain: uncordon-rendered-worker-bec5065070ded51e002c566a9c5bd16a
              machineconfiguration.openshift.io/lastAppliedDrain: uncordon-rendered-worker-bec5065070ded51e002c566a9c5bd16a
              machineconfiguration.openshift.io/reason:
              machineconfiguration.openshift.io/state: Done
              volumes.kubernetes.io/controller-managed-attach-detach: true
% oc logs cloud-network-config-controller-5cd96d477d-2kmc9 -n openshift-cloud-network-config-controller
W0320 03:00:08.981493 1 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0320 03:00:08.982280 1 leaderelection.go:248] attempting to acquire leader lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock...
E0320 03:00:38.982868 1 leaderelection.go:330] error retrieving resource lock openshift-cloud-network-config-controller/cloud-network-config-controller-lock: Get "https://api-int.huirwang-0309d.qe.gcp.devcluster.openshift.com:6443/api/v1/namespaces/openshift-cloud-network-config-controller/configmaps/cloud-network-config-controller-lock": dial tcp: lookup api-int.huirwang-0309d.qe.gcp.devcluster.openshift.com: i/o timeout
E0320 03:01:23.863454 1 leaderelection.go:330] error retrieving resource lock openshift-cloud-network-config-controller/cloud-network-config-controller-lock: Get "https://api-int.huirwang-0309d.qe.gcp.devcluster.openshift.com:6443/api/v1/namespaces/openshift-cloud-network-config-controller/configmaps/cloud-network-config-controller-lock": dial tcp: lookup api-int.huirwang-0309d.qe.gcp.devcluster.openshift.com on 172.30.0.10:53: read udp 10.129.0.14:52109->172.30.0.10:53: read: connection refused
I0320 03:02:19.249359 1 leaderelection.go:258] successfully acquired lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock
I0320 03:02:19.250662 1 controller.go:88] Starting node controller
I0320 03:02:19.250681 1 controller.go:91] Waiting for informer caches to sync for node workqueue
I0320 03:02:19.250693 1 controller.go:88] Starting secret controller
I0320 03:02:19.250703 1 controller.go:91] Waiting for informer caches to sync for secret workqueue
I0320 03:02:19.250709 1 controller.go:88] Starting cloud-private-ip-config controller
I0320 03:02:19.250715 1 controller.go:91] Waiting for informer caches to sync for cloud-private-ip-config workqueue
I0320 03:02:19.258642 1 controller.go:182] Assigning key: huirwang-0309d-r85mj-master-2.c.openshift-qe.internal to node workqueue
I0320 03:02:19.258671 1 controller.go:182] Assigning key: huirwang-0309d-r85mj-master-1.c.openshift-qe.internal to node workqueue
I0320 03:02:19.258682 1 controller.go:182] Assigning key: huirwang-0309d-r85mj-master-0.c.openshift-qe.internal to node workqueue
I0320 03:02:19.351258 1 controller.go:96] Starting node workers
I0320 03:02:19.351303 1 controller.go:102] Started node workers
I0320 03:02:19.351298 1 controller.go:96] Starting secret workers
I0320 03:02:19.351331 1 controller.go:102] Started secret workers
I0320 03:02:19.351265 1 controller.go:96] Starting cloud-private-ip-config workers
I0320 03:02:19.351508 1 controller.go:102] Started cloud-private-ip-config workers
E0320 03:02:19.589704 1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-1.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-1.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue
E0320 03:02:19.615551 1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-0.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-0.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue
E0320 03:02:19.644628 1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-2.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-2.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue
E0320 03:02:19.774047 1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-0.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-0.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue
E0320 03:02:19.783309 1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-1.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-1.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue
E0320 03:02:19.816430 1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-2.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-2.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue
Expected results:
EgressIP should work
Additional info:
It can be reproduced in 4.12 as well; it is not a regression issue.
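For reference, a hedged sketch of checking whether the subnet exists in the expected project (project, region and subnet names are taken from the 404 error above; in an XPN setup the subnet typically lives in the host project rather than the service project, which is the assumption this check probes):
~~~
$ gcloud compute networks subnets describe installer-shared-vpc-subnet-1 \
    --region us-central1 --project openshift-qe
~~~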
This is a clone of issue OCPBUGS-3824. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-2598. The following is the description of the original issue:
—
Description of problem:
Liveness probes of the ipsec pods fail on large clusters. Currently the command that is executed in the ipsec container is:
ovs-appctl -t ovs-monitor-ipsec ipsec/status && ipsec status
The problem is with the subcommand "ipsec/status". In clusters with a high node count this command returns a list with all the node daemons of the cluster, so as the node count rises the completion time of the command rises too.
This makes the main command
ovs-appctl -t ovs-monitor-ipsec
hang until the subcommand is finished.
As the liveness and readiness probe values are hardcoded in the manifest of the ipsec container here [https://github.com/openshift/cluster-network-operator/blob/9c1181e34316d34db49d573698d2779b008bcc20/bindata/network/ovn-kubernetes/common/ipsec.yaml], the liveness timeout of 60 seconds starts to be insufficient as the node count grows. This resulted in a cluster with 170+ nodes having 15+ ipsec pods in a CrashLoopBackOff state.
Version-Release number of selected component (if applicable):
Openshift Container Platform 4.10, but I think the same will be visible in other versions too.
How reproducible:
I was not able to reproduce because an extremely high amount of resources is needed, and I think there is no point as we have already spotted the issue.
Steps to Reproduce:
1. Install an Openshift cluster with IPsec enabled
2. Scale to 170+ nodes or more
3. Notice that the ipsec pods will start getting into a CrashLoopBackOff state with failed liveness/readiness probes.
Actual results:
IPsec pods are stuck in a CrashLoopBackOff state
Expected results:
IPsec pods work normally
Additional info:
We have provided a workaround where the CVO and CNO operators are scaled to 0 replicas so that we can increase the liveness probe timeout to a value of 600, which recovered the cluster. As a next step the customer will try to reduce the node count and restore the default liveness timeout value, along with bringing the operators back, to see if the cluster stabilizes.
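A minimal sketch of that workaround's probe change in the ipsec container manifest (everything except the 600-second value and the probe command from the description is an assumption):
~~~
livenessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - ovs-appctl -t ovs-monitor-ipsec ipsec/status && ipsec status
  timeoutSeconds: 600
~~~
Note that the CVO and CNO must be scaled down first, otherwise they revert this manual edit.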
Description of problem:
The alertmanager pod is stuck in ContainerCreating state on OCP 4.11 with OVN. From oc describe alertmanager pod:
...
Events:
  Type     Reason                  Age                  From     Message
  ----     ------                  ----                 ----     -------
  Warning  FailedCreatePodSandBox  16s (x459 over 17h)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-managed-ocs-alertmanager-0_openshift-storage_3a55ed54-4eaa-4f65-8a10-e5d21fad1ebc_0(88575547dc0b210307b89dd2bb8e379ece0962b607ac2707a1c2cf630b1aaa78): error adding pod openshift-storage_alertmanager-managed-ocs-alertmanager-0 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-storage/alertmanager-managed-ocs-alertmanager-0/3a55ed54-4eaa-4f65-8a10-e5d21fad1ebc:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-storage/alertmanager-managed-ocs-alertmanager-0 88575547dc0b210307b89dd2bb8e379ece0962b607ac2707a1c2cf630b1aaa78] [openshift
Version-Release number of selected component (if applicable):
OCP 4.11 with OVN
How reproducible:
100%
Steps to Reproduce:
1. Terminate the node on which the alertmanager pod is running
2. The pod will get stuck in ContainerCreating state
Actual results:
The alertmanager pod is stuck in ContainerCreating state
Expected results:
Alertmanager pod is ready
Additional info:
The workaround would be to terminate the alertmanager pod
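A sketch of that workaround (pod name and namespace taken from the event log above):
~~~
$ oc delete pod alertmanager-managed-ocs-alertmanager-0 -n openshift-storage
~~~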
This is a clone of issue OCPBUGS-7458. The following is the description of the original issue:
—
Description of problem:
- After upgrading to OCP 4.10.41, thanos-ruler-user-workload-1 in the openshift-user-workload-monitoring namespace is consistently being created and deleted.
- We had to scale down the Prometheus operator multiple times so that the upgrade is considered successful.
- This fix is temporary. After some time the issue appears again and the Prometheus operator needs to be scaled down and up again.
- The issue is present on all clusters in this customer environment which are upgraded to 4.10.41.
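A quick way to observe the churn described above (a sketch; the namespace and statefulset name come from the description):
~~~
$ oc get pods -n openshift-user-workload-monitoring -w | grep thanos-ruler
~~~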
Version-Release number of selected component (if applicable):
How reproducible:
N/A, I wasn't able to reproduce the issue.
Steps to Reproduce:
Actual results:
Expected results:
Additional info:
Description of problem:
Create a LoadBalancer-type service within the OCP 4.11.x OVNKubernetes cluster to expose the api server endpoint; the service does not respond to normal oc requests. But some of them work, like "oc whoami" and "oc get --raw /api".
Version-Release number of selected component (if applicable):
4.11.8 with OVNKubernetes
How reproducible:
always
Steps to Reproduce:
1. Setup openshift cluster 4.11 on AWS with OVNKubernetes as the default network
2. Create the following service under the openshift-kube-apiserver namespace to expose the api
----
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1800"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  name: test-api
  namespace: openshift-kube-apiserver
spec:
  allocateLoadBalancerNodePorts: true
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerSourceRanges:
  - <my_ip>/32
  ports:
  - nodePort: 31248
    port: 6443
    protocol: TCP
    targetPort: 6443
  selector:
    apiserver: "true"
    app: openshift-kube-apiserver
  sessionAffinity: None
  type: LoadBalancer
----
3. Setup the DNS resolution for the access xxx.mydomain.com ---> <elb-auto-generated-dns>
4. Try to access the cluster api via the service above by updating the kubeconfig to use the custom dns name
Actual results:
No response from the server side.
$ time oc get node -v8
I1025 08:29:10.284069  103974 loader.go:375] Config loaded from file: bmeng.kubeconfig
I1025 08:29:10.294017  103974 round_trippers.go:420] GET https://rh-api.bmeng-ccs-ovn.3o13.s1.devshift.org:6443/api/v1/nodes?limit=500
I1025 08:29:10.294035  103974 round_trippers.go:427] Request Headers:
I1025 08:29:10.294043  103974 round_trippers.go:431]     Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
I1025 08:29:10.294052  103974 round_trippers.go:431]     User-Agent: oc/openshift (linux/amd64) kubernetes/e40bd2d
I1025 08:29:10.365119  103974 round_trippers.go:446] Response Status: 200 OK in 71 milliseconds
I1025 08:29:10.365142  103974 round_trippers.go:449] Response Headers:
I1025 08:29:10.365148  103974 round_trippers.go:452]     Audit-Id: 83b9d8ae-05a4-4036-bff6-de371d5bec12
I1025 08:29:10.365155  1