Code Issues #4

Open
theGirlwithMoon opened this issue Dec 26, 2023 · 3 comments

@theGirlwithMoon

Hello! Can this code run? I ran into some errors running it with PyTorch and could not find a solution. Are there any special requirements other than the environment you specified?

@zhengbw0324
Collaborator

@theGirlwithMoon
Hello, could you give me some more detailed information, such as your configuration and the error log?

@theGirlwithMoon
Author

Here is my configuration:
PyTorch 2.0.1
CPU
Python 3.8
And here is my error log:

D:\ProgramFiles\Anaconda\envs\pytorch\python.exe D:/code/python/ReSeq-main/ReSeq-main/main.py
25 Dec 18:10 INFO Parameters:
gpu_id=0
worker=10
use_gpu=True
seed=2020
state=INFO
reproducibility=True
data_path=./run/datasets\ask
checkpoint_dir=saved
show_progress=True
save_dataset=True
dataset_save_path=./saved/ask-dataset.pth
save_dataloaders=False
dataloaders_save_path=None
shuffle=True
epochs=300
train_batch_size=1024
learner=adam
learning_rate=0.001
train_neg_sample_num=1
eval_step=1
stopping_step=10
clip_grad_norm=None
weight_decay=1e-05
loss_decimal_place=4
require_pow=False
enable_amp=False
enable_scaler=False
transform=None
metrics=['Recall', 'Precision', 'MRR', 'NDCG']
topk=[5]
valid_metric=NDCG@5
valid_metric_bigger=True
eval_batch_size=20480
metric_decimal_place=4
n_layers=2
n_heads=2
embedding_size=64
n_factors=64
inner_size=256
hidden_dropout_prob=0.5
attn_dropout_prob=0.5
hidden_act=gelu
layer_norm_eps=1e-12
initializer_range=0.02
t_weight=2.0
kd_loss_weight=0.003
field_separator=
seq_separator=
USER_ID_FIELD=q_id
ITEM_ID_FIELD=a_id
RATING_FIELD=label
TIME_FIELD=timestamp
seq_len=None
LABEL_FIELD=label
threshold={'label': 0.5}
NEG_PREFIX=neg_
USER_LIST_LENGTH_FIELD=q_list_length
ITEM_LIST_LENGTH_FIELD=a_list_length
LIST_SUFFIX=_list
MAX_LIST_LENGTH=50
POSITION_FIELD=position_id
NEG_USERS_FIELD=neg_q
NEG_ITEMS_FIELD=neg_a
eval_neg_num=100
dataset=ask
model=ReSeq
eval_type=EvaluatorType.RANKING
eval_candidate_num=50
device=cpu
25 Dec 18:10 INFO Load filtered dataset from: [./saved/ask-dataset.pth]
25 Dec 18:10 INFO [Training]: train_batch_size = [1024]
25 Dec 18:10 INFO [Evaluation]: eval_batch_size = [20480]
25 Dec 18:10 INFO ReSeq(
(user_f_embedding): Embedding(6032, 64, padding_idx=0)
(item_f_embedding): Embedding(3417, 64, padding_idx=0)
(user_p_embedding): Embedding(6032, 64, padding_idx=0)
(item_p_embedding): Embedding(3417, 64, padding_idx=0)
(user_preference): Transformer(
(position_embedding): Embedding(51, 64)
(trm_encoder): TransformerEncoder(
(layer): ModuleList(
(0-1): 2 x TransformerLayer(
(multi_head_attention): MultiHeadAttention(
(query): Linear(in_features=64, out_features=64, bias=True)
(key): Linear(in_features=64, out_features=64, bias=True)
(value): Linear(in_features=64, out_features=64, bias=True)
(softmax): Softmax(dim=-1)
(attn_dropout): Dropout(p=0.5, inplace=False)
(dense): Linear(in_features=64, out_features=64, bias=True)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(out_dropout): Dropout(p=0.5, inplace=False)
)
(feed_forward): FeedForward(
(dense_1): Linear(in_features=64, out_features=256, bias=True)
(dense_2): Linear(in_features=256, out_features=64, bias=True)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.5, inplace=False)
)
)
)
)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.5, inplace=False)
)
(item_preference): Transformer(
(position_embedding): Embedding(51, 64)
(trm_encoder): TransformerEncoder(
(layer): ModuleList(
(0-1): 2 x TransformerLayer(
(multi_head_attention): MultiHeadAttention(
(query): Linear(in_features=64, out_features=64, bias=True)
(key): Linear(in_features=64, out_features=64, bias=True)
(value): Linear(in_features=64, out_features=64, bias=True)
(softmax): Softmax(dim=-1)
(attn_dropout): Dropout(p=0.5, inplace=False)
(dense): Linear(in_features=64, out_features=64, bias=True)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(out_dropout): Dropout(p=0.5, inplace=False)
)
(feed_forward): FeedForward(
(dense_1): Linear(in_features=64, out_features=256, bias=True)
(dense_2): Linear(in_features=256, out_features=64, bias=True)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.5, inplace=False)
)
)
)
)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.5, inplace=False)
)
(user_feature): Transformer(
(position_embedding): Embedding(51, 64)
(trm_encoder): TransformerEncoder(
(layer): ModuleList(
(0-1): 2 x TransformerLayer(
(multi_head_attention): MultiHeadAttention(
(query): Linear(in_features=64, out_features=64, bias=True)
(key): Linear(in_features=64, out_features=64, bias=True)
(value): Linear(in_features=64, out_features=64, bias=True)
(softmax): Softmax(dim=-1)
(attn_dropout): Dropout(p=0.5, inplace=False)
(dense): Linear(in_features=64, out_features=64, bias=True)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(out_dropout): Dropout(p=0.5, inplace=False)
)
(feed_forward): FeedForward(
(dense_1): Linear(in_features=64, out_features=256, bias=True)
(dense_2): Linear(in_features=256, out_features=64, bias=True)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.5, inplace=False)
)
)
)
)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.5, inplace=False)
)
(item_feature): Transformer(
(position_embedding): Embedding(51, 64)
(trm_encoder): TransformerEncoder(
(layer): ModuleList(
(0-1): 2 x TransformerLayer(
(multi_head_attention): MultiHeadAttention(
(query): Linear(in_features=64, out_features=64, bias=True)
(key): Linear(in_features=64, out_features=64, bias=True)
(value): Linear(in_features=64, out_features=64, bias=True)
(softmax): Softmax(dim=-1)
(attn_dropout): Dropout(p=0.5, inplace=False)
(dense): Linear(in_features=64, out_features=64, bias=True)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(out_dropout): Dropout(p=0.5, inplace=False)
)
(feed_forward): FeedForward(
(dense_1): Linear(in_features=64, out_features=256, bias=True)
(dense_2): Linear(in_features=256, out_features=64, bias=True)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.5, inplace=False)
)
)
)
)
(LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.5, inplace=False)
)
(rec_loss): BPRLoss()
(kd_loss): MSELoss()
)
Trainable parameters: 1631154
Train 0: 0%| | 0/53 [00:03<?, ?it/s]
Traceback (most recent call last):
File "D:/code/python/ReSeq-main/ReSeq-main/main.py", line 125, in
main_process(model=args.model, dataset=args.dataset, config_file_list=config_file_list)
File "D:/code/python/ReSeq-main/ReSeq-main/main.py", line 95, in main_process
best_valid_score, best_valid_result = trainer.fit(
File "D:\code\python\ReSeq-main\ReSeq-main\trainer.py", line 292, in fit
train_loss = self._train_epoch(
File "D:\code\python\ReSeq-main\ReSeq-main\trainer.py", line 141, in _train_epoch
for batch_idx, interaction in enumerate(iter_data):
File "D:\ProgramFiles\Anaconda\Anaconda3\envs\pytorch\lib\site-packages\tqdm\std.py", line 1178, in iter
for obj in iterable:
File "D:\code\python\ReSeq-main\ReSeq-main\data\dataloader.py", line 67, in iter
res = super().iter()
File "D:\ProgramFiles\Anaconda\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 441, in iter
return self._get_iterator()
File "D:\ProgramFiles\Anaconda\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 388, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "D:\ProgramFiles\Anaconda\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 1042, in init
w.start()
File "D:\ProgramFiles\Anaconda\Anaconda3\envs\pytorch\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "D:\ProgramFiles\Anaconda\Anaconda3\envs\pytorch\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\ProgramFiles\Anaconda\Anaconda3\envs\pytorch\lib\multiprocessing\context.py", line 327, in _Popen
return Popen(process_obj)
File "D:\ProgramFiles\Anaconda\Anaconda3\envs\pytorch\lib\multiprocessing\popen_spawn_win32.py", line 93, in init
reduction.dump(process_obj, to_child)
File "D:\ProgramFiles\Anaconda\Anaconda3\envs\pytorch\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'torch._C.Generator' object
Traceback (most recent call last):
File "", line 1, in
File "D:\ProgramFiles\Anaconda\Anaconda3\envs\pytorch\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "D:\ProgramFiles\Anaconda\Anaconda3\envs\pytorch\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Process finished with exit code 1
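
The crash occurs when the DataLoader tries to start its 10 worker processes: on Windows, multiprocessing uses the spawn start method, which pickles the dataset and related loader state for each new worker, and a seeded `torch._C.Generator` cannot be pickled. A minimal sketch of the same failure mode (all names hypothetical, assuming the dataset object carries a `torch.Generator` for reproducibility, as the `seed=2020` / `reproducibility=True` settings above suggest):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class ToyDataset(Dataset):
    """Hypothetical stand-in for ReSeq's dataset object."""
    def __init__(self):
        # A seeded RNG kept on the dataset; torch._C.Generator is not picklable.
        self.rng = torch.Generator().manual_seed(2020)

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return torch.tensor(idx)

loader = DataLoader(ToyDataset(), batch_size=4, num_workers=10)
# On Windows (spawn start method), iterating pickles the dataset once per
# worker and raises: TypeError: cannot pickle 'torch._C.Generator' object.
# On Linux (fork start method), the same loop runs without error.
for batch in loader:
    print(batch)
```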

@zhengbw0324
Collaborator

@theGirlwithMoon
Please try reducing the `worker` parameter.
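
For reference, a sketch of that workaround, assuming the `worker` option maps to the DataLoader's `num_workers` (consistent with `worker=10` in the log above): with `num_workers=0`, data is loaded in the main process, so no worker process is spawned and nothing needs to be pickled:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(8))
# num_workers=0 loads batches in the main process: no child processes are
# spawned, so the seeded torch._C.Generator is never serialized.
loader = DataLoader(dataset, batch_size=4, num_workers=0)
for (batch,) in loader:
    print(batch)
```

In the configuration above, that would correspond to changing `worker=10` to `worker=0`.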
