
Issue after upgrading Elassandra from 6.2.3 to 6.8.4 #404

Open
serversteam opened this issue Apr 13, 2022 · 0 comments
serversteam commented Apr 13, 2022

I upgraded Elassandra from 6.2.3 to 6.8.4. After the upgrade, all indices are unassigned and the cluster status is RED. We have a 3-node cluster, and nodetool status shows all nodes as UN.

Datacenter: dc1

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load        Tokens  Owns  Host ID                             Rack
UN  192.168..  124.02 GiB  256     ?     **********************************  rack1
UN  192.168..  115.82 GiB  256     ?     **********************************  rack1
UN  192.168..  120.9 GiB   256     ?     **********************************  rack1

I am getting the error below in the logs:

2022-04-13 12:10:36,675 WARN [elasticsearch[192.168..][masterService#updateTask][T#1]] MasterService.java:712 executeTasks org.elasticsearch.cluster.service.MasterService$$Lambda$2197/1556570788@46b2def6
java.lang.IllegalArgumentException: can't add node {192.168..}{f9ed38c8-b7e0-4944-a0fe-4af6b5c6eb84}{f9ed38c8-b7e0-4944-a0fe-4af6b5c6eb84}{192.168..}{192.168..:9300}{DISABLED}{dc=dc1, rack=rack1}, found existing node {192.168..}{2c19J9KBSlWuOA_iSwufGw}{2c19J9KBSlWuOA_iSwufGw}{192.168..}{192.168..:9300}{ALIVE}{rack=rack1, dc=dc1} with same address
at org.elasticsearch.cluster.node.DiscoveryNodes$Builder.add(DiscoveryNodes.java:649)
at org.elassandra.discovery.CassandraDiscovery$GossipCluster.nodes(CassandraDiscovery.java:237)
at org.elassandra.discovery.CassandraDiscovery.nodes(CassandraDiscovery.java:943)
at org.elassandra.discovery.CassandraDiscovery$RoutingTableUpdateTaskExecutor.execute(CassandraDiscovery.java:553)
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:704)
at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:302)
at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:227)
at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:158)
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
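The exception indicates that Elassandra sees two gossip entries for the same address: one ALIVE node with a normal host ID, and one DISABLED node whose ID equals its gossip UUID, which suggests a stale entry left over from the upgrade. A hedged diagnostic sketch for comparing what Cassandra reports against what gossip advertises (standard nodetool commands; the cqlsh query is an assumption, as the Elassandra metadata table name may vary by version):

```shell
# Compare the Host ID column of `nodetool status` with what gossip advertises
# per endpoint; a leftover entry for an old host ID on the same address can
# make Elassandra try to add the node twice.
nodetool status
nodetool gossipinfo

# Elassandra persists cluster metadata in the elastic_admin keyspace;
# inspecting it may reveal a stale node entry. (Assumed table name --
# check `DESCRIBE KEYSPACE elastic_admin;` on your version first.)
# cqlsh -e "SELECT * FROM elastic_admin.metadata_log LIMIT 10;"
```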

curl http://192.168.*.*:9200/_cluster/health?pretty
{
  "cluster_name" : "*",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1162,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 0.0
}
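With 1162 unassigned shards (and only 1 of 3 nodes visible to Elasticsearch), the per-shard unassignment reason is a useful next data point. A sketch using the `_cat/shards` API, which exists in Elasticsearch 6.x (the host placeholder follows the masked address above):

```shell
# List each shard with its state and why it is unassigned
# (e.g. CLUSTER_RECOVERED, NODE_LEFT, ALLOCATION_FAILED).
curl -s 'http://192.168.*.*:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason&v'
```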

java -version
openjdk version "1.8.0_312"
OpenJDK Runtime Environment (build 1.8.0_312-8u312-b07-0ubuntu1~18.04-b07)
OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode)

Linux ip-192-168-*-* 5.4.0-1060-aws #63~18.04.1-Ubuntu SMP Mon Nov 15 14:31:31 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

curl http://192.168.*.*:9200/test_2020_11_04/allocation/explain?pretty=true
{
  "error" : {
    "root_cause" : [
      {
        "type" : "remote_transport_exception",
        "reason" : "[192.168..][192.168..:9300][indices:data/read/get[s]]"
      }
    ],
    "type" : "action_request_validation_exception",
    "reason" : "Validation Failed: 1: test.allocation table does not exists;"
  },
  "status" : 400
}
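As an aside, the 400 above is an artifact of the URL, not the allocation problem itself: `test_2020_11_04/allocation/explain` is routed to the document GET API (index/type/id), which is why Elasticsearch complains about a missing `allocation` table. In Elasticsearch 6.x the allocation-explain API is cluster-level and takes the index in the request body. A sketch (shard 0 / primary are illustrative values, adjust as needed):

```shell
# Ask the master why a specific shard of test_2020_11_04 is unassigned.
curl -s -H 'Content-Type: application/json' \
  'http://192.168.*.*:9200/_cluster/allocation/explain?pretty' \
  -d '{"index": "test_2020_11_04", "shard": 0, "primary": true}'
```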

PS: I haven't copied data from another node manually; I am upgrading the existing nodes in place.
