Bug Report

In this setup I have 5 Fluentd pods, and 2 of them were OOMKilled and restarted several times. Though I have not found the root cause of the OOM or of the "failed to flush chunk" errors, I decided to allocate more memory to the Fluentd pods. The engine keeps logging warnings such as:

[2022/03/24 04:20:26] [ warn] [engine] failed to flush chunk '1-1648095560.297175793.flb', retry in 161 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)

The Fluent Bit configuration in use:

[SERVICE]
    Flush        1
    Daemon       off
    Log_Level    info
    Parsers_File parsers.conf
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020

[INPUT]
    Name   forward
    Listen 0.0.0.0
    Port   24224

[INPUT]
    Name cpu
    Tag  metrics_cpu

[INPUT]
    Name disk
    Tag  metrics_disk

[INPUT]
    Name mem
    Tag  metrics_memory

[INPUT]
    Name      netif
    Tag       metrics_netif
    Interface eth0

[FILTER]
    Name parser

At debug level, the tail input is paused and keep-alive connections to Elasticsearch are recycled:

[2022/03/25 07:08:30] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
[2022/03/24 04:19:34] [debug] [upstream] KA connection #104 to 10.3.4.84:9200 has been assigned (recycled)

Fluentd or collector pods throw errors similar to the following:

2022-01-28T05:59:48.087126221Z 2022-01-28 05:59:48 +0000 : [retry_default] failed to flush the buffer.

Elasticsearch rejects the bulk items with HTTP 400:

{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"N-Mmun8BI6SaBP9luqVq","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}

Changing the output to Type _doc resolved the problem.
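The fix boils down to pinning the document type on the es output. A minimal sketch of such an [OUTPUT] section, assuming the Host/Port seen in the logs above and a logstash-format index (the Match pattern and Logstash_Format are assumptions about this setup, not the poster's exact config):

```ini
# Sketch only: Host/Port come from the log excerpts above;
# Match and Logstash_Format are assumptions.
[OUTPUT]
    Name            es
    Match           kube.*
    Host            10.3.4.84
    Port            9200
    Logstash_Format On
    Replace_Dots    On
    Type            _doc
```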
This guide will help you check for common problems that cause the log "Failed to flush index" to appear.

Environment: k3s 1.19.8, docker-ce backend, 20.10.12.

Each failing chunk is retried with an increasing delay, and the retry objects are re-used across attempts while the paused input cannot append records:

[2022/03/25 07:08:36] [ warn] [engine] failed to flush chunk '1-1648192099.641327100.flb', retry in 11 seconds: task_id=2, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:41] [debug] [retry] re-using retry for task_id=11 attempts=2
[2022/03/24 04:19:54] [ warn] [engine] failed to flush chunk '1-1648095560.205735907.flb', retry in 40 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records

Every rejected bulk item carries the same error: Existing mapping for [kubernetes.labels.app] must be of type object but found [text].
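The "tail.0 is paused, cannot append records" lines mean the tail input hit its in-memory buffer limit while the es output was backed up. Raising the limit, or moving chunks to filesystem buffering, gives the retries room to drain; a sketch (the Path and the 50MB limit are assumptions, not the poster's values):

```ini
# Sketch only: Path, Tag and Mem_Buf_Limit are assumed values.
[INPUT]
    Name          tail
    Path          /var/log/containers/*.log
    Tag           kube.*
    Mem_Buf_Limit 50MB
    storage.type  filesystem
```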
Environment: Infrastructure: Kubernetes; Deployment tool: helm.

The same pattern shows up in older reports as well:

[2021/02/23 10:15:04] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=85527545 watch_fd=11
[2021/02/23 10:15:17] [ warn] [engine] failed to flush chunk '1-1614075316.467653746.flb', retry in 6 seconds: task_id=16, input=tail.0 > output=es.0 (out_id=0)

In every case the bulk response repeats: Existing mapping for [kubernetes.labels.app] must be of type object but found [text]. The conflict is between the label key app, already mapped as text, and dotted label keys such as app.kubernetes.io/instance, which Elasticsearch tries to expand into an object under app.
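If the labels are not needed in Elasticsearch, the conflict can be avoided at the source: the Fluent Bit kubernetes filter can simply not attach labels and annotations to the enriched records. A sketch (the Match pattern is an assumption):

```ini
# Sketch only: stops the kubernetes filter from attaching labels,
# so conflicting label keys never reach the index. Match is assumed.
[FILTER]
    Name        kubernetes
    Match       kube.*
    Labels      Off
    Annotations Off
```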
The es output points at Host 10.3.4.84. The keep-alive connection lifecycle looks healthy; connections are released and recycled between attempts:

[2022/03/25 07:08:50] [debug] [upstream] KA connection #35 to 10.3.4.84:9200 is now available
[2022/03/25 07:08:50] [debug] [upstream] KA connection #35 to 10.3.4.84:9200 has been assigned (recycled)
[2022/03/25 07:08:50] [ warn] [engine] failed to flush chunk '1-1648192119.62045721.flb', retry in 18 seconds: task_id=13, input=tail.0 > output=es.0 (out_id=0)

Yet each retried bulk still fails with the same mapper_parsing_exception on [app.kubernetes.io/instance].
Note that the output reports HTTP Status=200 on /_bulk even while items are being rejected; the failures are inside the bulk response body:

[2022/03/25 07:08:32] [debug] [outputes.0] HTTP Status=200 URI=/_bulk

{"took":2217,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"yeMnun8BI6SaBP9lo-jn","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...

Once the mapping issue is resolved, all logs are sent to Elasticsearch and displayed in Kibana.
From fluent-bit to es: "[ warn] [engine] failed to flush chunk" is tracked upstream in https://github.com/fluent/fluent-bit/issues/4386.

When the response cannot be parsed at all, the output logs an error instead of a warning:

[2022/03/24 04:19:38] [error] [outputes.0] could not pack/validate JSON response

New retries are created per task and re-used on subsequent attempts:

[2022/03/25 07:08:40] [debug] [retry] new retry created for task_id=14 attempts=1
[2022/03/25 07:08:51] [debug] [retry] re-using retry for task_id=15 attempts=2
Only restarting td-agent-bit helps. I have also set Replace_Dots On, yet the bulk responses still report errors:

{"took":3473,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"2-Mmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...

Meanwhile the tail input keeps rotating files:

[2022/03/24 04:20:20] [debug] [input:tail:tail.0] purge: monitored file has been deleted: /var/log/containers/hello-world-7mwzw_argo_main-4a2ecde2fd5310129cdf3e3c7eacc17fc1ae0eb6b5e88bed0fdf8fd7fd1100f4.log
I had similar issues with "failed to flush chunk" in the fluent-bit logs, and eventually figured out that the index I was trying to send logs to already had a _type set to doc, while fluent-bit was trying to send with _type set to _doc (which is the default). Typical surrounding debug output:

[2022/03/25 07:08:22] [debug] [retry] new retry created for task_id=4 attempts=1
[2022/03/25 07:08:50] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
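In that situation the fix is to make the output match the index's existing type instead of the default. A sketch (Host/Port are assumptions carried over from the logs in this thread):

```ini
# Sketch only: target an index whose existing _type is "doc" rather
# than the default "_doc". Host/Port/Match are assumed values.
[OUTPUT]
    Name  es
    Match *
    Host  10.3.4.84
    Port  9200
    Type  doc
```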
The output is configured with Retry_Limit False.

(In a related setup I used a Premium Block Blob storage account, but the account kind/SKU don't seem to matter.)

Bug Report: when Fluent Bit 1.8.9 first restarts to apply configuration changes, we are seeing spamming errors in the log like:

[2021/10/30 02:47:00] [ warn] [engine] failed to flush chunk '2372-1635562009.567200761.flb', ...

Around the failures, the tail input keeps discovering and dismissing container log files:

[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-dsxks_argo_wait-114879608f2fe019cd6cfce8e3777f9c0a4f34db2f6dc72bb39b2b5ceb917d4b.log, inode 1885019
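For context, Retry_Limit False tells the engine to retry a failed chunk indefinitely, while a numeric value discards the chunk after that many attempts. A sketch (the value 5 is an arbitrary example):

```ini
[OUTPUT]
    Name        es
    Match       *
    Retry_Limit False   # retry forever, never drop chunks
    # or, to cap retries and drop the chunk afterwards:
    # Retry_Limit 5
```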
On the Fluentd side, a similar symptom comes from the buffer itself:

next_retry=2019-01-27 19:00:14 -0500 error_class="ArgumentError" error="Data too big (189382 bytes), would create more than 128 chunks!" plugin_id="object:3fee25617fbc"

Because of this, cache memory increases and td-agent fails to send messages to Graylog.

I use 2.0.6, and no matter whether I set Type _doc or Replace_Dots On, I still see masses of the warn logs above.

The problem also reproduces with a Kafka output:

[SERVICE]
    Flush        5
    Daemon       Off
    Log_Level    ${LOG_LEVEL}
    Parsers_File parsers.conf
    Plugins_File plugins.conf
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020

[INPUT]
    Name dummy
    Rate 1
    Tag  dummy.log

[OUTPUT]
    Name  stdout
    Match *

[OUTPUT]
    Name          kafka
    Match         *
    Brokers       ${BROKER_ADDRESS}
    Topics        bit
    Timestamp_Key @timestamp
    Retry_Limit   false
    # Specify the number of extra seconds to monitor a file once is ...
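The "Data too big ... would create more than 128 chunks!" error means a single write exceeded what the fluentd buffer is willing to split across chunks; enlarging the chunk size is the usual knob. A sketch of a fluentd buffer section (the destination plugin and all values are assumptions, not the poster's config):

```
<match **>
  @type elasticsearch
  host 10.3.4.84
  port 9200
  <buffer>
    @type file
    path /var/log/fluentd-buffers/app.buffer
    chunk_limit_size 8M        # bigger chunks -> fewer chunks per payload
    total_limit_size 512M
    flush_interval 5s
    overflow_action block      # apply backpressure instead of dropping
  </buffer>
</match>
```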
Fluentd does not handle a large number of chunks well when starting up, so that can be a problem as well. I'm using fluentd logging on k8s for application logging; we are handling 100M (around 400 tps) and getting this issue.

With a long backlog the retry delays grow very large (installed with helm chart fluent-bit-0.19.19):

[2022/03/22 03:57:49] [ warn] [engine] failed to flush chunk '1-1647920934.181870214.flb', retry in 786 seconds: task_id=739, input=tail.0 > output=es.0 (out_id=0)

Bug Report: the same "failed to flush chunk" also occurs with a Kafka output:

{"log":"[2021/05/04 03:56:19] [ warn] [engine] failed to flush chunk '107-1618921823.521467425.flb', retry in 508 seconds: task_id=170 input=tail.0 \u003e output=kafka.0 (out_id=0)\n","s...
When files are rotated or deleted, the tail input cleans up its watches while retries are still pending:

[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104051102 events: IN_ATTRIB
[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=104051102 file has been deleted: /var/log/containers/hello-world-89skv_argo_main-41261a71eea53f67b43c6e1b643d273e59fade2d8d16ee9f4d70e01766e5cc1d.log
[2022/03/25 07:08:27] [ warn] [engine] failed to flush chunk '1-1648192097.600252923.flb', retry in 14 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
Retries keep being re-used while successful bulks interleave with rejected ones:

[2022/03/24 04:20:26] [debug] [retry] re-using retry for task_id=2 attempts=5
[2022/03/25 07:08:41] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:20:20] [debug] [input:tail:tail.0] scanning path /var/log/containers/*.log
What version? Kubernetes?

Thanks for your answers; it took some time after the holidays (happy new year, everybody) to dive into the fluent-bit errors. The tail input keeps discovering and dismissing container log files while the es output alternates between successful bulks and the same mapper_parsing_exception:

[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-wpr5j_argo_main-55a61ed18250cc1e46ac98d918072e16dab1c6a73f7f9cf0a5dd096959cf6964.log, inode 35326802
[2022/03/25 07:08:22] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
[2022/03/25 07:08:47] [debug] [retry] re-using retry for task_id=2 attempts=4