Bug 2035847 - Two dashes in the Cronjob / Job pod name
Summary: Two dashes in the Cronjob / Job pod name
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-controller-manager
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.10.0
Assignee: Maciej Szulik
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-12-28 02:37 UTC by Anping Li
Modified: 2022-03-10 16:36 UTC
CC: 3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-10 16:36:35 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2022:0056 0 None None None 2022-03-10 16:36:48 UTC

Description Anping Li 2021-12-28 02:37:03 UTC
Description of problem:
The cron job pod name contains two consecutive dashes (--); only one (-) is expected.


Version-Release number of selected component (if applicable):
Version: 4.10.0-0.nightly-2021-12-23-153012


How reproducible:
Always

Steps to Reproduce:
1. Deploy cluster logging.

2. Check the cronjob pod name
# oc get cronjob
NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
elasticsearch-im-app     */15 * * * *   False     0        13m             15m
elasticsearch-im-audit   */15 * * * *   False     0        13m             15m
elasticsearch-im-infra   */15 * * * *   False     0        13m             15m

# oc get pods -l "logging-infra"="indexManagement" -o name
pod/elasticsearch-im-app-27344280--1-9pxtm
pod/elasticsearch-im-audit-27344280--1-wgq8v
pod/elasticsearch-im-infra-27344280--1-tv4lh

Expected results:
# oc get pods -l "logging-infra"="indexManagement" -o name
pod/elasticsearch-im-app-27344280-1-9pxtm
pod/elasticsearch-im-audit-27344280-1-wgq8v
pod/elasticsearch-im-infra-27344280-1-tv4lh

Comment 1 Anping Li 2021-12-28 02:44:24 UTC
The same issue affects the other cron job pods.

$ oc get pods --all-namespaces | grep -E '\-\-'
openshift-logging                                  elasticsearch-im-app-27344310--1-btp8z                                0/1     Completed   0              12m
openshift-logging                                  elasticsearch-im-audit-27344310--1-c89xj                              0/1     Completed   0              12m
openshift-logging                                  elasticsearch-im-infra-27344310--1-6gfml                              0/1     Completed   0              12m
openshift-marketplace                              7ada5e2e41a35e52e10b2d3f71c73c0143ed80ab4bd73a31d17914--1-66q9x       0/1     Completed   0              60m
openshift-marketplace                              bd21de40ff29167bb62f859356cedcf8931e79fbf0345393972a14--1-h5zbj       0/1     Completed   0              60m
openshift-operator-lifecycle-manager               collect-profiles-27344280--1-rp2nc                                    0/1     Completed   0              42m
openshift-operator-lifecycle-manager               collect-profiles-27344295--1-68c8f                                    0/1     Completed   0              27m
openshift-operator-lifecycle-manager               collect-profiles-27344310--1-79t62                                    0/1     Completed   0              12m

Comment 2 Xingxing Xia 2021-12-28 03:12:59 UTC
oc create job pi-job --image=image-registry.openshift-image-registry.svc:5000/openshift/perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'
oc get pod
pi-job--1-x2xhc        0/1     Completed   0          2m17s
This output shows that the root cause is in the Job controller, not in CronJob specifically.

Further, the Kubernetes Job documentation at https://kubernetes.io/docs/concepts/workloads/controllers/job/ shows single-dash pod names:
"
kubectl describe jobs/pi
  Normal  SuccessfulCreate  14m   job-controller  Created pod: pi-5rwd7

kubectl get pods
The output is similar to this:
pi-5rwd7
"

So, the Job controller has a regression.
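The `pi-job--1-x2xhc` name above suggests how the double dash could arise: non-indexed Jobs carry an internal completion index of -1, and if the pod-name template unconditionally appends that index, `-` followed by `-1` yields `--1`. This is a speculative sketch of that construction; the variable names and the -1 convention are assumptions for illustration, not taken from the actual controller code:

```shell
# Hypothetical sketch of the buggy pod-name construction.
# Assumption: NonIndexed Jobs use -1 as their internal completion index.
job_name="pi-job"
completion_index=-1
random_suffix="x2xhc"
# Unconditionally appending the index produces the extra dash:
echo "${job_name}-${completion_index}-${random_suffix}"   # -> pi-job--1-x2xhc
```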

Comment 3 Maciej Szulik 2022-01-03 17:16:54 UTC
This will be fixed when https://github.com/openshift/kubernetes/pull/1087 lands.
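The general shape of such a fix (a hedged sketch only; the function name, arguments, and mode strings are illustrative, not the actual PR code) is to append the completion index only when the Job runs in Indexed completion mode:

```shell
# Illustrative sketch: include the completion index in the pod name
# only for Indexed Jobs, so NonIndexed Jobs never emit "--1".
make_pod_name() {
  local job=$1 mode=$2 index=$3 rand=$4
  if [ "$mode" = "Indexed" ]; then
    echo "${job}-${index}-${rand}"
  else
    echo "${job}-${rand}"
  fi
}

make_pod_name pi-job NonIndexed -1 wcd62    # -> pi-job-wcd62
make_pod_name sample-job Indexed 0 abcde    # -> sample-job-0-abcde
```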

Comment 5 zhou ying 2022-01-30 04:49:36 UTC
Can't reproduce the issue now:

[root@localhost ~]# oc version --client
Client Version: 4.10.0-202201281850.p0.g7c299f1.assembly.stream-7c299f1

[root@localhost ~]# oc create job pi-job --image=image-registry.openshift-image-registry.svc:5000/openshift/perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'
job.batch/pi-job created
[root@localhost ~]# oc get pods
NAME           READY   STATUS              RESTARTS   AGE
pi-job-wcd62   0/1     ContainerCreating   0          3s

Comment 8 errata-xmlrpc 2022-03-10 16:36:35 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056

