Chapter 15. Cassandra



The Apache Cassandra database is the right choice when you need scalability and high availability without compromising performance. Linear scalability and proven fault-tolerance on commodity hardware or cloud infrastructure make it the perfect platform for mission-critical data. Cassandra’s support for replicating across multiple datacenters is best-in-class, providing lower latency for your users and the peace of mind of knowing that you can survive regional outages. The largest known Cassandra cluster has over 300 TB of data in over 400 machines.

 -- Apache Cassandra Homepage

The following sections outline the various ways in which Titan can be used in concert with Cassandra.


If you are using Cassandra 2.2 or higher, you need to explicitly enable Thrift so that Titan can connect to the cluster. Do so by running ./bin/nodetool enablethrift.

15.1. Local Server Mode


Cassandra can be run as a standalone database on the same local host as Titan and the end-user application. In this model, Titan and Cassandra communicate with one another via a localhost socket. Running Titan over Cassandra requires the following setup steps:

  1. Download Cassandra, unpack it, and set the filesystem paths in conf/cassandra.yaml and conf/log4j-server.properties.
  2. Start Cassandra by invoking bin/cassandra -f on the command line in the directory where Cassandra was unpacked. Read the output to check that Cassandra started successfully.

    Now, you can create a Cassandra TitanGraph as follows:

    TitanGraph g = TitanFactory.build().
        set("storage.backend", "cassandra").
        set("storage.hostname", "127.0.0.1").
        open();

In the Gremlin shell, you cannot define the type of the variable g. Therefore, simply leave off the type declaration, as shown below.
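
For example, the snippet above becomes the following in the Gremlin shell:

    gremlin> g = TitanFactory.build().
                 set("storage.backend", "cassandra").
                 set("storage.hostname", "127.0.0.1").
                 open()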

15.2. Remote Server Mode


When the graph needs to scale beyond the confines of a single machine, Cassandra and Titan are logically separated onto different machines. In this model, the Cassandra cluster maintains the graph representation and any number of Titan instances maintain socket-based read/write access to the Cassandra cluster. The end-user application runs in the same JVM as Titan and interacts with it directly.

For example, suppose we have a running Cassandra cluster in which one of the machines is reachable at a known IP address. Connecting Titan to the cluster is then accomplished as follows, where the bracketed value is a placeholder for that address (comma-separate IP addresses to reference more than one machine):

TitanGraph graph = TitanFactory.build().
    set("storage.backend", "cassandra").
    set("storage.hostname", "[IP-address-of-a-Cassandra-machine]").
    open();

In the Gremlin shell, you cannot define the type of the variable graph. Therefore, simply leave off the type declaration.
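
Equivalently, the same connection settings can be kept in a properties file and passed to; the file name used here is hypothetical:

    storage.backend = cassandra
    storage.hostname = [IP-address-1],[IP-address-2]

    TitanGraph graph ="titan-cassandra.properties");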

15.3. Remote Server Mode with Gremlin Server


Gremlin Server can be wrapped around each Titan instance defined in the previous subsection. In this way, the end-user application need not be a Java-based application as it can communicate with Gremlin Server as a client. This type of deployment is great for polyglot architectures where various components written in different languages need to reference and compute on the graph.

Start Gremlin Server using bin/, and then in an external Gremlin Console session you can send Gremlin commands over the wire:

:plugin use tinkerpop.server
:remote connect tinkerpop.server conf/remote.yaml
:> g.addV()

In this case, each Gremlin Server would be configured to connect to the Cassandra cluster. The following shows the graph-specific fragment of the Gremlin Server configuration. Refer to Chapter 7, Titan Server for a complete example and more information on how to configure the server.

graphs: {
  g: conf/titan-cassandra.properties}
plugins:
  - aurelius.titan

For more information about Gremlin Server, see the latest Apache TinkerPop documentation.
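
Because Gremlin Server speaks a language-agnostic protocol, clients written in many languages can connect through their respective drivers. As a minimal Java sketch using TinkerPop's gremlin-driver (the class name is hypothetical; it reuses the conf/remote.yaml from above):

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;

public class GremlinServerClient {
    public static void main(String[] args) throws Exception {
        // Read the same endpoint description the console session used above
        Cluster cluster ="conf/remote.yaml");
        Client client = cluster.connect();
        // Submit a Gremlin script for evaluation on the server
        for (Result r : client.submit("g.addV()").all().get()) {
            System.out.println(r);
        }
        cluster.close();
    }
}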

15.4. Titan Embedded Mode


Finally, Cassandra can be embedded in Titan, which means that Titan and Cassandra run in the same JVM and communicate via in-process calls rather than over the network. This removes the (de)serialization and network protocol overhead and can therefore lead to considerable performance improvements. In this deployment mode, Titan internally starts a Cassandra daemon, and Titan no longer connects to an existing cluster but is its own cluster.

To use Titan in embedded mode, simply configure embeddedcassandra as the storage backend. The configuration options listed below also apply to embedded Cassandra. In creating a Titan cluster, ensure that the individual nodes can discover each other via the Gossip protocol, so set up a Titan-with-Cassandra-embedded cluster much like you would a standalone Cassandra cluster. When running Titan in embedded mode, the Cassandra yaml file is configured using the additional configuration option storage.conf-file, which specifies the yaml file as a full URL, e.g. storage.conf-file = file:///home/cassandra.yaml.
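
Putting those options together, a minimal embedded-mode configuration mirrors the earlier snippets (the yaml path is the example value from above):

    TitanGraph graph = TitanFactory.build().
        set("storage.backend", "embeddedcassandra").
        set("storage.conf-file", "file:///home/cassandra.yaml").
        open();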

When running a cluster with Titan and Cassandra embedded, it is advisable to expose Titan through the Gremlin Server so that applications can remotely connect to the Titan graph database and execute queries.

Note that running Titan with Cassandra embedded requires GC tuning. While embedded Cassandra can provide lower-latency query answering, its GC behavior under load is less predictable.

15.5. Cassandra Specific Configuration

Refer to Chapter 12, Configuration Reference for a complete listing of all Cassandra specific configuration options in addition to the general Titan configuration options.

When configuring Cassandra it is recommended to consider the following Cassandra specific configuration options:

  • read-consistency-level: Cassandra consistency level for read operations
  • write-consistency-level: Cassandra consistency level for write operations
  • replication-factor: The replication factor to use. The higher the replication factor, the more robust the graph database is to machine failure, at the expense of data duplication. The default value should be overridden for production systems to ensure robustness; a value of 3 is recommended. The replication factor can only be set when the keyspace is initially created. On an existing keyspace, this value is ignored.
  • thrift.frame_size_mb: The maximum frame size to be used by Thrift for transport. Increase this value when retrieving very large result sets. Only applicable when storage.backend=cassandrathrift.
  • keyspace: The name of the keyspace to store the Titan graph in. Allows multiple Titan graphs to co-exist in the same Cassandra cluster.

For more information on Cassandra consistency levels and acceptable values, please refer to the Cassandra Thrift API. In general, higher levels are more consistent and robust but have higher latency.
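
For illustration only, a connection that pins these options might look as follows; the storage.cassandra option prefix is an assumption here, so verify the exact option paths against Chapter 12, Configuration Reference:

    TitanGraph graph = TitanFactory.build().
        set("storage.backend", "cassandra").
        set("storage.hostname", "127.0.0.1").
        set("storage.cassandra.read-consistency-level", "QUORUM").   // assumed option path
        set("storage.cassandra.write-consistency-level", "QUORUM").  // assumed option path
        set("storage.cassandra.replication-factor", 3).              // assumed option path
        open();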

15.6. Global Graph Operations

Titan over Cassandra supports global vertex and edge iteration. However, note that all of these vertices and/or edges will be loaded into memory, which can cause an OutOfMemoryError. Use Chapter 32, Titan with TinkerPop’s Hadoop-Gremlin to iterate over all vertices or edges in large graphs effectively.
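
As a minimal sketch of such a global scan (connection settings reused from Section 15.1; safe only on small graphs):

import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;
import org.apache.tinkerpop.gremlin.structure.Vertex;

import java.util.Iterator;

public class GlobalScan {
    public static void main(String[] args) throws Exception {
        TitanGraph graph = TitanFactory.build().
                set("storage.backend", "cassandra").
                set("storage.hostname", "127.0.0.1").
                open();
        // Global iteration: every vertex visited here is loaded into memory,
        // which is why large graphs risk an OutOfMemoryError.
        Iterator<Vertex> vertices = graph.vertices();
        while (vertices.hasNext()) {
            System.out.println(vertices.next().id());
        }
        graph.close();
    }
}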

15.7. Deploying on Amazon EC2


Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.

 -- Amazon EC2

Follow these steps to set up a Cassandra cluster on EC2 and deploy Titan over Cassandra. To follow these instructions, you need an Amazon AWS account with established authentication credentials and some basic knowledge of AWS and EC2.

15.7.1. Setup Cassandra Cluster

These instructions for configuring and launching the DataStax Cassandra Community Edition AMI are based on the DataStax AMI Docs and focus on aspects relevant for a Titan deployment.

15.7.2. Setting up Security Group

  • Navigate to the EC2 Console Dashboard, then click on "Security Groups" under "Network & Security".
  • Create a new security group. Click Inbound. Set the "Create a new rule" dropdown menu to "Custom TCP rule".
  • Add a rule for port 22 (SSH) from source 0.0.0.0/0.
  • Add a rule for ports 1024-65535 from the security group members. If you don’t want to open all unprivileged ports among security group members, then at least open 7000, 7199, and 9160 among security group members.
  • Tip: the "Source" dropdown will autocomplete security group identifiers once "sg" is typed in the box, so you needn’t have the exact value ready beforehand.

15.7.3. Launch DataStax Cassandra AMI

  • "Launch the DataStax AMI in your desired zone
  • On the Instance Details page of the Request Instances Wizard, set "Number of Instances" to your desired number of Cassandra nodes. Set "Instance Type" to at least m1.large; we recommend m1.xlarge.
  • On the Advanced Instance Options page of the Request Instances Wizard, set the "as text" radio button under "User Data", then fill this into the text box:
--clustername [cassandra-cluster-name]
--totalnodes [number-of-instances]
--version community
--opscenter no

[number-of-instances] in this configuration must match the number of EC2 instances configured on the previous wizard page. [cassandra-cluster-name] can be any string used for identification. For example:

--clustername titan
--totalnodes 4
--version community
--opscenter no
  • On the Tags page of the Request Instances Wizard you can apply any desired tags. These tags exist only at the EC2 administrative level and have no effect on the Cassandra daemons' configuration or operation.
  • On the Create Key Pair page of the Request Instances Wizard, either select an existing key pair or create a new one. The PEM file containing the private half of the selected key pair will be required to connect to these instances.
  • On the Configure Firewall page of the Request Instances Wizard, select the security group created earlier.
  • Review and launch instances on the final wizard page.

15.7.4. Verify Successful Instance Launch

  • SSH into any Cassandra instance node: ssh -i [your-private-key].pem ubuntu@[public-dns-name-of-any-cassandra-instance]
  • Run the Cassandra nodetool command nodetool -h localhost ring to inspect the state of the Cassandra token ring. You should see as many nodes in this command’s output as instances launched in the previous steps.

Note that the AMI takes a few minutes to configure each instance. A shell prompt will appear upon successful configuration when you SSH into the instance.

15.7.5. Launch Titan Instances

Launch additional EC2 instances to run Titan, configured either in Remote Server Mode or in Remote Server Mode with Gremlin Server, as described above. You only need to note the IP address of one of the Cassandra cluster instances and configure it as the host name. The particular EC2 instance to run and the particular configuration depend on your use case.

15.7.6. Example Titan Instance on Amazon Linux AMI

  • Launch the Amazon Linux AMI in the same zone as the Cassandra cluster. Choose your desired EC2 instance type depending on the amount of resources you need. Use the default configuration options and select the same Key Pair and Security Group as for the Cassandra cluster configured in the previous step.
  • SSH into the newly created instance via ssh -i [your-private-key].pem ec2-user@[public-dns-name-of-the-instance]. You may have to wait a little for the instance to launch.
  • Download the current Titan distribution with wget and unpack the archive locally to the home directory. Start the Gremlin shell to verify that Titan runs successfully. For more information on how to unpack Titan and start the Gremlin shell, please refer to the Getting Started guide.
  • Create a configuration file with vi and add the following lines:

    storage.backend = cassandra
    storage.hostname = [IP-address-of-one-Cassandra-EC2-instance]

You may add additional configuration options found on this page or in Chapter 12, Configuration Reference.
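
For instance, the following additions would pin the keyspace and replication factor; the storage.cassandra option prefix is an assumption to verify against Chapter 12:

    storage.cassandra.keyspace = titan
    storage.cassandra.replication-factor = 3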

  • Start the Gremlin shell again and type the following:

    gremlin> graph ='[your-configuration-file]')

You have successfully connected this Titan instance to the Cassandra cluster and can start to operate on the graph.

15.7.7. Connect to Cassandra cluster in EC2 from outside EC2

Opening the usual Cassandra ports (9160, 7000, 7199) in the security group is not enough, because the Cassandra nodes by default broadcast their EC2-internal IPs, and not their public-facing IPs.

The resulting behavior is that you can open a Titan graph on the cluster by connecting to port 9160 on any Cassandra node, but all requests to that graph time out. This is because Cassandra is telling the client to connect to an unreachable IP.

To fix this, set the broadcast_address property for each instance in /etc/cassandra/cassandra.yaml to its public-facing IP, and restart the instance. Do this for all nodes in the cluster. Once the cluster comes back, nodetool reports the correct public-facing IPs to which connections from the local machine are allowed.
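
For example, the relevant fragment of each node's cassandra.yaml would look like this (the value is a placeholder for that node's public IP):

    broadcast_address: [public-facing-IP-of-this-node]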

Changing the broadcast_address property allows you to connect to the cluster from outside EC2, but it might also mean that traffic originating within EC2 will have to round-trip to the internet and back before it gets to the cluster. So, this approach is only useful for development and testing.