
[PyTorch] Missing op aten::rnn_tanh #11827

Closed
cVladu opened this issue Jun 22, 2022 · 1 comment
Comments
cVladu commented Jun 22, 2022

I built a simple RNN network in PyTorch and tried to convert it using the relay.frontend.from_pytorch interface.

Expected behavior

The network is converted successfully.

Actual behavior

NotImplementedError: The following operators are not implemented: ['aten::rnn_tanh'] is raised.

Environment

Operating system:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic

TVM version: 0.9.dev0
TVM was built from source following https://tvm.apache.org/docs/install/from_source.html, with no changes to the config.cmake file.

Steps to reproduce

import torch
from tvm import relay


def test_RNN_torch(num_layers: int,
                   bidirectional: bool,
                   use_bias: bool,
                   hidden_size: int,
                   input_size: int,
                   seq_len: int,
                   batch_first: bool,
                   batch_size: int):
    r''' 
    Args:
        num_layers (int): num_layers to be passed to torch.nn.RNN
        bidirectional (bool): whether to build bidirectional RNN or not
        use_bias (bool): whether to use bias or not
        hidden_size (int): hidden_size of RNN cells
        input_size (int): Input features
        seq_len (int): Timesteps in input data
        batch_first (bool): Whether batch dimension is first or second dimension in input tensor
        batch_size (int): Batch size of input. If 0, unbatched input will be fed to network
    '''

    if batch_first:
        input_shape = (batch_size, seq_len, input_size)
    else:
        input_shape = (seq_len, batch_size, input_size)
    pytorch_net = torch.nn.Sequential(
        torch.nn.RNN(input_size,
                     hidden_size,
                     batch_first=batch_first,
                     num_layers=num_layers,
                     bidirectional=bidirectional,
                     bias=use_bias)
    )

    scripted_model = torch.jit.trace(pytorch_net.eval(),
                                     torch.randn(input_shape))

    mod, params = relay.frontend.from_pytorch(scripted_model,
                                              [('input', input_shape)])
    mod = relay.transform.InferType()(mod)
    print(mod.astext())

if __name__ == "__main__":

    test_RNN_torch(1,
                   False,
                   True,
                   5,
                   5,
                   15,
                   True,
                   32)

Full traceback:

Traceback (most recent call last):
  File "lstm_transform.py", line 477, in <module>
    test_RNN_torch(1,
  File "lstm_transform.py", line 469, in test_RNN_torch
    mod, params = relay.frontend.from_pytorch(scripted_model,
  File "/home/catalinvladu/Mambaforge/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/frontend/pytorch.py", line 4030, in from_pytorch
    converter.report_missing_conversion(op_names)
  File "/home/catalinvladu/Mambaforge/envs/tvm-build/lib/python3.8/site-packages/tvm/relay/frontend/pytorch.py", line 3236, in report_missing_conversion
    raise NotImplementedError(msg)
NotImplementedError: The following operators are not implemented: ['aten::rnn_tanh']

If this is not the correct place to raise the issue, or if my expected result is wrong, please let me know.


yuanfz98 commented Jun 23, 2022

Hello cVladu,

It is the correct place to raise the issue. :)

When you call

relay.frontend.from_pytorch

you are converting a PyTorch model (or a JIT-traced model) into a TVM Relay module. If TVM has not implemented an equivalent of an operator used by the model (in your case, aten::rnn_tanh), this error is raised.

So a contributor needs to add the conversion. I can take a stab at it.
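For reference, aten::rnn_tanh is the op PyTorch emits for an Elman RNN with the default tanh nonlinearity, i.e. h' = tanh(W_ih x + b_ih + W_hh h + b_hh) applied at each time step. A minimal plain-Python sketch of one time step (an illustration of the math only, not the TVM converter or PyTorch internals):

```python
import math

def rnn_tanh_step(x, h, w_ih, w_hh, b_ih, b_hh):
    """One Elman RNN step with tanh nonlinearity:
    h' = tanh(W_ih @ x + b_ih + W_hh @ h + b_hh).
    Weights are lists of lists (rows = hidden units); x and h are flat lists.
    """
    hidden_size = len(h)
    h_next = []
    for i in range(hidden_size):
        # Input-to-hidden and hidden-to-hidden contributions plus both biases.
        acc = b_ih[i] + b_hh[i]
        acc += sum(w_ih[i][j] * x[j] for j in range(len(x)))
        acc += sum(w_hh[i][j] * h[j] for j in range(hidden_size))
        h_next.append(math.tanh(acc))
    return h_next
```

The full aten::rnn_tanh op is essentially this step unrolled over seq_len (and stacked for num_layers / reversed for bidirectional), which is roughly what a Relay converter would have to express with existing Relay ops.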

@masahi masahi changed the title [Bug] relay.frontend.from_pytorch fails for RNN networks [PyTorch] Missing op aten::rnn_tanh Jun 24, 2022
@masahi masahi closed this as completed Jul 7, 2022