This is just a quick FYI post, as I don't see this documented anywhere else on the web. As of now, in all versions of Cassandra, a gc_grace_seconds setting of 0 will disable hinted handoff. To avoid an edge case that could cause deleted data to reappear in a cluster (detailed in Jira CASSANDRA-5314), hints are stored with a TTL equal to the smallest gc_grace_seconds of the column families being written. A gc_grace_seconds setting of 0 therefore causes hints to expire instantly, and they will never be streamed off when a node comes back up.
Here's the code, from cassandra/src/java/org/apache/cassandra/db/RowMutation.java:
    /*
     * determine the TTL for the hint RowMutation
     * this is set at the smallest GCGraceSeconds for any of the CFs in the RM
     * this ensures that deletes aren't "undone" by delivery of an old hint
     */
    public int calculateHintTTL()
    {
        int ttl = Integer.MAX_VALUE;
        for (ColumnFamily cf : getColumnFamilies())
            ttl = Math.min(ttl, cf.metadata().getGcGraceSeconds());
        return ttl;
    }
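As a hedged illustration of that min-over-column-families logic (a standalone sketch with my own helper names, not Cassandra's actual classes): a mutation that touches any table with gc_grace_seconds of 0 gets a hint TTL of 0, regardless of what the other tables are set to.

```java
import java.util.List;

public class HintTtlSketch {
    // Mirrors the shape of RowMutation.calculateHintTTL(): the hint TTL is
    // the smallest gc_grace_seconds across all CFs touched by the mutation.
    static int calculateHintTTL(List<Integer> gcGraceSecondsPerCF) {
        int ttl = Integer.MAX_VALUE;
        for (int gcGrace : gcGraceSecondsPerCF)
            ttl = Math.min(ttl, gcGrace);
        return ttl;
    }

    public static void main(String[] args) {
        // Cassandra's default gc_grace_seconds is 864000 (10 days).
        System.out.println(calculateHintTTL(List.of(864000, 86400))); // 86400
        // A single CF with gc_grace_seconds = 0 drags the hint TTL to 0:
        System.out.println(calculateHintTTL(List.of(864000, 0)));     // 0
    }
}
```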
And the log lines, showing 0 rows handed off to every endpoint:
INFO 03:00:48,578 Finished hinted handoff of 0 rows to endpoint /10.0.0.58
INFO 03:00:48,584 Finished hinted handoff of 0 rows to endpoint /10.0.0.59
INFO 03:00:48,589 Finished hinted handoff of 0 rows to endpoint /10.0.0.37
INFO 03:00:48,594 Finished hinted handoff of 0 rows to endpoint /10.0.0.36
INFO 03:00:48,599 Finished hinted handoff of 0 rows to endpoint /10.0.0.39
INFO 03:00:48,604 Finished hinted handoff of 0 rows to endpoint /10.0.0.38
INFO 03:00:48,608 Finished hinted handoff of 0 rows to endpoint /10.0.0.33
INFO 03:00:48,613 Finished hinted handoff of 0 rows to endpoint /10.0.0.32
INFO 03:00:48,617 Finished hinted handoff of 0 rows to endpoint /10.0.0.35
INFO 03:00:48,622 Finished hinted handoff of 0 rows to endpoint /10.0.0.34
INFO 03:00:48,627 Finished hinted handoff of 0 rows to endpoint /10.0.0.45
In a single DC, losing hints isn't a huge issue: if you are reading at QUORUM you'd end up fixing the missed writes via read repair, even if consistency is compromised somewhat in the meantime. In a multi-DC setup with LOCAL_QUORUM, though, this is a killer: that data will never come across the WAN without a full repair. Yikes!
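To make the failure mode concrete, here's a simplified model of hint expiry (my own sketch, not Cassandra code): any delivery attempt necessarily happens at or after the hint's write time, so a TTL of 0 means the hint is dead on arrival.

```java
public class HintExpirySketch {
    // Simplified model: a hint is live only while now < writtenAt + ttl.
    static boolean isLive(long writtenAtMillis, int ttlSeconds, long nowMillis) {
        return nowMillis < writtenAtMillis + ttlSeconds * 1000L;
    }

    public static void main(String[] args) {
        long writtenAt = System.currentTimeMillis();
        // With a 10-day TTL, the hint survives until the node comes back:
        System.out.println(isLive(writtenAt, 864000, writtenAt + 60_000)); // true
        // With ttl = 0, the hint has expired by the time anything reads it:
        System.out.println(isLive(writtenAt, 0, writtenAt));               // false
    }
}
```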