Short answer: there is no way to run an efficient range query on two or more clustering columns in Cassandra.
Long answer: to understand the core reason for this limitation, you should consider how Cassandra stores columns inside a single partition.
All rows inside a partition in a single SSTable file are sorted by their clustering keys. For your schema that means:
2015-01-01 100
2015-01-01 200
2015-01-01 300
2015-01-02 100
2015-01-02 200
2015-01-02 300
2015-01-03 100
There is a way to read a single slice of data with a range query:
SELECT * FROM decimalrangeck WHERE segment='SEG1' AND date>='2015-01-01' AND date<'2015-01-03';
For this query Cassandra issues a single disk seek, because it knows exactly where the required data starts and ends (the rows are sorted on disk).
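To make the single-seek idea concrete, here is a minimal Python sketch (my own model, not Cassandra code; the dates and values are taken from the listing above): rows sorted by the clustering key behave like a sorted list, so a one-column slice is one binary search to the start followed by one contiguous read.

```python
from bisect import bisect_left

# Model of one partition: rows sorted by the clustering key (date, dec),
# just as Cassandra keeps them inside an SSTable.
rows = [
    ("2015-01-01", 100), ("2015-01-01", 200), ("2015-01-01", 300),
    ("2015-01-02", 100), ("2015-01-02", 200), ("2015-01-02", 300),
    ("2015-01-03", 100),
]

# date >= '2015-01-01' AND date < '2015-01-03':
# one "seek" to the start of the slice, then one contiguous read to the end.
start = bisect_left(rows, ("2015-01-01",))  # a 1-tuple sorts before ("2015-01-01", x)
end = bisect_left(rows, ("2015-01-03",))
print(rows[start:end])  # all six rows for 2015-01-01 and 2015-01-02
```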
You can even have a two-column slice query:
SELECT * FROM decimalrangeck WHERE segment='SEG1' AND date>='2015-01-01' AND date<'2015-01-03' AND dec>=100 AND dec<=200;
I believe you'll expect these results:
2015-01-01 100
2015-01-01 200
2015-01-02 100
2015-01-02 200
But you'll get a slightly surprising output:
2015-01-01 100
2015-01-01 200
2015-01-01 300
2015-01-02 100
2015-01-02 200
The difference is the row 2015-01-01 300. It appears in the output for a reason: your restriction is turned into a single contiguous slice with a start bound (2015-01-01, 100) and an end bound (2015-01-02, 200). Cassandra then reads all the data between these two points in a single disk seek.
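You can reproduce that behaviour with a small Python model (an illustration of the slice semantics, not actual Cassandra internals): a single contiguous slice between the two bounds unavoidably picks up (2015-01-01, 300), while the result you expected would require filtering every row.

```python
from bisect import bisect_left, bisect_right

# Same in-memory model of the partition, sorted by (date, dec).
rows = [
    ("2015-01-01", 100), ("2015-01-01", 200), ("2015-01-01", 300),
    ("2015-01-02", 100), ("2015-01-02", 200), ("2015-01-02", 300),
    ("2015-01-03", 100),
]

# What Cassandra reads: one contiguous slice between the two bounds.
start = bisect_left(rows, ("2015-01-01", 100))
end = bisect_right(rows, ("2015-01-02", 200))
one_slice = rows[start:end]

# What the WHERE clause literally asks for: a per-row filter.
expected = [(d, v) for d, v in rows
            if "2015-01-01" <= d < "2015-01-03" and 100 <= v <= 200]

print(("2015-01-01", 300) in one_slice)  # True: the surprising extra row
print(("2015-01-01", 300) in expected)   # False
```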
A true range query on two clustering columns would require a separate disk seek for every value of the first column in the range, and such multi-seek queries are considered too performance-unfriendly for Cassandra to support.
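For comparison, here is a sketch (again my own model, not Cassandra internals) of what an exact two-column range would cost: each date's matching dec values form a separate contiguous run, so you need one fresh seek per distinct date.

```python
from bisect import bisect_left, bisect_right

# Same in-memory model of the partition, sorted by (date, dec).
rows = [
    ("2015-01-01", 100), ("2015-01-01", 200), ("2015-01-01", 300),
    ("2015-01-02", 100), ("2015-01-02", 200), ("2015-01-02", 300),
    ("2015-01-03", 100),
]

result, seeks = [], 0
for date in ("2015-01-01", "2015-01-02"):   # every distinct date in the range
    lo = bisect_left(rows, (date, 100))     # a fresh seek for each date...
    hi = bisect_right(rows, (date, 200))    # ...then a short contiguous read
    result.extend(rows[lo:hi])
    seeks += 1

print(result)  # exactly the four rows you expected
print(seeks)   # grows linearly with the number of dates in the range
```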