This repository was archived by the owner on Feb 10, 2021. It is now read-only.

Kerberos // Failed to extract principal from ticket cache: Bad format in credentials cache #158

Open
mehd-io opened this issue Apr 20, 2018 · 5 comments

Comments

@mehd-io

mehd-io commented Apr 20, 2018

Hi there,
We have a Kerberized cluster with SSL, and I used the same keytab that works for my other services to try out hdfs3, like this:

from hdfs3 import HDFileSystem
conf = {'hadoop.security.authentication': 'kerberos'}
ticket_path = '/home/dazer/mykey.keytab'
hdfs = HDFileSystem(host='hdfs://myhost', port=8020, pars=conf, ticket_cache=ticket_path)

and got this error:

ConnectionError: Connection Failed: HdfsIOException: FileSystem: Failed to extract principal from ticket cache: Bad format in credentials cache (filename: /home/dazer/mykey.keytab)

Any clue?
Thx!

@martindurant
Member

A ticket cache is not the same thing as a keytab file. You should use the keytab file together with kinit to create a Kerberos ticket, e.g. something like

kinit -k -t /home/dazer/mykey.keytab myprincipal@DOMAIN

and check what happened with klist, which will also tell you the location of the ticket cache (typically a file such as /tmp/krb5cc_<uid>).
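Once the ticket exists, the path to pass as ticket_cache is that cache file, not the keytab. A minimal sketch of locating it (the /tmp/krb5cc_<uid> default and the KRB5CCNAME override are standard MIT Kerberos conventions on Linux; verify against the "Ticket cache:" line in your klist output):

```python
import os

def default_ticket_cache():
    # MIT Kerberos honours KRB5CCNAME when set; otherwise the usual
    # file-based default on Linux is /tmp/krb5cc_<uid>.
    return os.environ.get("KRB5CCNAME", "/tmp/krb5cc_%d" % os.getuid())

ticket_path = default_ticket_cache()
# hdfs = HDFileSystem(host='myhost', port=8020, pars=conf, ticket_cache=ticket_path)
```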

@mehd-io
Author

mehd-io commented Apr 23, 2018

Yeah, you're right @martindurant, sorry, I just realised that! Now I've got another problem, which isn't very explicit :/ I'll investigate.

/binaries/anaconda3/lib/python3.6/site-packages/hdfs3/core.py in __init__(self, host, port, connect, autoconf, pars, **kwargs)
     74 
     75         if connect:
---> 76             self.connect()
     77 
     78     def __getstate__(self):

/binaries/anaconda3/lib/python3.6/site-packages/hdfs3/core.py in connect(self)
    139         else:
    140             msg = ensure_string(_lib.hdfsGetLastError()).split('\n')[0]
--> 141             raise ConnectionError('Connection Failed: {}'.format(msg))
    142 
    143     def delegate_token(self, user=None):

ConnectionError: Connection Failed: Problem with callback handler

@martindurant
Member

Never saw that one before :| Note that this message is coming from the C-library layer (libhdfs3), not the Python library.

@mehd-io
Author

mehd-io commented Apr 23, 2018

Actually, I got more log info via the Python shell, as below:

2018-04-23 14:04:29.744451, p28387, th140584994699008, INFO Retrying connect to server: "afzera:8020". Already tried 9 time(s)
^[2018-04-23 14:04:34.936077, p28387, th140584994699008, ERROR Failed to setup RPC connection to "afzera:8020" caused by:
RpcChannel.cpp: 840: Problem with callback handler
	@	Hdfs::Internal::UnWrapper<Hdfs::SafeModeException, Hdfs::SaslException, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing>::unwrap(char const*, int)
	@	Hdfs::Internal::UnWrapper<Hdfs::AccessControlException, Hdfs::SafeModeException, Hdfs::SaslException, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing>::unwrap(char const*, int)
	@	Hdfs::Internal::UnWrapper<Hdfs::UnsupportedOperationException, Hdfs::AccessControlException, Hdfs::SafeModeException, Hdfs::SaslException, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing>::unwrap(char const*, int)
	@	Hdfs::Internal::UnWrapper<Hdfs::RpcNoSuchMethodException, Hdfs::UnsupportedOperationException, Hdfs::AccessControlException, Hdfs::SafeModeException, Hdfs::SaslException, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing>::unwrap(char const*, int)
	@	Hdfs::Internal::UnWrapper<Hdfs::NameNodeStandbyException, Hdfs::RpcNoSuchMethodException, Hdfs::UnsupportedOperationException, Hdfs::AccessControlException, Hdfs::SafeModeException, Hdfs::SaslException, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing, Hdfs::Internal::Nothing>::unwrap(char const*, int)

I noticed the line ERROR Failed to setup RPC connection, and as @quasiben said in #91
and here: https://github.com/Pivotal-Data-Attic/attic-c-hdfs-client/issues/53
it's apparently due to the fact that hadoop.rpc.protection is set to privacy on the cluster.

I tried to override this with:

conf = {'hadoop.security.authentication': 'kerberos', 'hadoop.rpc.protection': 'authenticate'}
hdfs = HDFileSystem(host='hdfs://myhost', port=8020, pars=conf, ticket_cache=ticket_path)

but it's still not working. Does this setting actually get overridden by core-site.xml? I didn't set any global env vars, though...

PS: I'm also running on a CDH cluster but can't change core-site.xml.
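One thing worth noting here: 'authenticate' is not one of Hadoop's accepted values for hadoop.rpc.protection; the valid settings are 'authentication', 'integrity', and 'privacy', and the client-side value generally has to match what the cluster's core-site.xml enforces rather than downgrade it. A hedged sketch of the config dict (host/port and ticket_path are placeholders from the thread, not verified settings):

```python
# Valid hadoop.rpc.protection values per Hadoop's configuration docs.
# To talk to a cluster that enforces privacy, the client should ask
# for privacy too, not a weaker level.
VALID_RPC_PROTECTION = ('authentication', 'integrity', 'privacy')

conf = {
    'hadoop.security.authentication': 'kerberos',
    'hadoop.rpc.protection': 'privacy',  # match the cluster's setting
}
assert conf['hadoop.rpc.protection'] in VALID_RPC_PROTECTION
# hdfs = HDFileSystem(host='myhost', port=8020, pars=conf, ticket_cache=ticket_path)
```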

@martindurant
Member

To attempt to get "privacy" working, you could try installing https://anaconda.org/mdurant/libgsasl/1.8.1/download/linux-64/libgsasl-1.8.1-1.tar.bz2 explicitly; better to do that in a clean environment. Note that you might also be interested in trying Arrow's libhdfs native connector (no "3" in that name), which has closer integration with the Java security infrastructure.
