DataStax connection exception when using beeline or Hive2 JDBC driver (Tableau)
I have installed DataStax Enterprise 2.8 on a dev VM (CentOS 7). The install went through smoothly, and the single-node cluster is working great. However, when I try to connect to the cluster using beeline or the Hive2 JDBC driver, I get the error shown below. My main aim is to connect Tableau using the DataStax Enterprise driver or the Spark SQL driver.
The error observed is:
ERROR 2016-04-14 17:57:56,915 org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269) ~[libthrift-0.9.3.jar:0.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_99]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_99]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_99]
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
    at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) ~[libthrift-0.9.3.jar:0.9.3]
    ... 4 common frames omitted
ERROR 2016-04-14 17:58:59,140 org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend: Application has been killed. Reason: Master removed our application: KILLED
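As a diagnostic sketch (not part of the original post): an "Invalid status -128" from TSaslServerTransport generally means a client spoke a different protocol to the Thrift endpoint, which often happens when a client is pointed at the wrong port. A quick way to check which of the relevant ports are actually listening on the node, assuming `ss` from iproute2 is available on CentOS 7:

```shell
# Check which services are listening before pointing a client at a port:
#   9042  - native (CQL) transport
#   9160  - Cassandra Thrift RPC
#   10000 - Spark SQL Thrift server (only listens once it has been started)
ss -ltn | grep -E ':(9042|9160|10000)\b'
```

If nothing is listening on 10000, the Hive2 JDBC connection has nowhere to go.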
My cassandra.yaml config:
cluster_name: 'cluster1'
num_tokens: 256
hinted_handoff_enabled: true
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
permissions_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
disk_failure_policy: stop
commit_failure_policy: stop
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.33.1.124"
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 10.33.1.124
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: 10.33.1.124
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
column_index_size_in_kb: 64
batch_size_warn_threshold_in_kb: 64
compaction_throughput_mb_per_sec: 16
compaction_large_partition_warning_threshold_mb: 100
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
cross_node_timeout: false
endpoint_snitch: com.datastax.bdp.snitch.DseSimpleSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
    internode_encryption: none
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra
    truststore: resources/dse/conf/.truststore
    truststore_password: cassandra
client_encryption_options:
    enabled: false
    optional: false
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra
internode_compression: dc
inter_dc_tcp_nodelay: false
When connecting with beeline, the error is:
dse beeline
Beeline version 0.12.0.11 by Apache Hive
beeline> !connect jdbc:hive2://10.33.1.124:10000
scan complete in 10ms
Connecting to jdbc:hive2://10.33.1.124:10000
Enter username for jdbc:hive2://10.33.1.124:10000: cassandra
Enter password for jdbc:hive2://10.33.1.124:10000: *********
Error: Invalid URL: jdbc:hive2://10.33.1.124:10000 (state=08S01,code=0)
0: jdbc:hive2://10.33.1.124:10000> !connect jdbc:hive2://10.33.1.124:10000
Connecting to jdbc:hive2://10.33.1.124:10000
Enter username for jdbc:hive2://10.33.1.124:10000:
Enter password for jdbc:hive2://10.33.1.124:10000:
Error: Invalid URL: jdbc:hive2://10.33.1.124:10000 (state=08S01,code=0)
1: jdbc:hive2://10.33.1.124:10000>
I see similar errors when connecting through Tableau as well.
The JDBC driver connects to the Spark SQL Thrift server. If you do not start it, you cannot connect to it.
dse spark-sql-thriftserver
/Users/russellspitzer/DSE/bin/dse:
usage: dse spark-sql-thriftserver <command> [Spark SQL Thriftserver options]

Available commands:
  start    Start Spark SQL Thriftserver
  stop     Stops Spark SQL Thriftserver
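Putting that together, a sketch of the fix, assuming the node address and the default HiveServer2 port 10000 from the question:

```shell
# Start the Spark SQL Thrift server on the DSE node first
dse spark-sql-thriftserver start

# Optionally confirm something is now listening on the HiveServer2 port
ss -ltn | grep ':10000'

# Then connect with beeline using the same JDBC URL as before
dse beeline
# beeline> !connect jdbc:hive2://10.33.1.124:10000
```

Once the Thrift server is up, Tableau can be pointed at the same host and port with the Spark SQL (Hive2) driver.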