
Time-Frequency Consistency Loss is not utilized #21

Open
xiaoyuan7 opened this issue Mar 28, 2023 · 10 comments

Comments

@xiaoyuan7

xiaoyuan7 commented Mar 28, 2023

I noticed that the Time-Frequency Consistency Loss is not being utilized in your code. Could you please confirm whether this is intentional or not? And if it is not being used intentionally, could you please explain the reason behind it and its potential impact on the model's performance?
[image attachment]
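For context, a minimal sketch of one way a time-frequency consistency term can be computed: pull each sample's time-domain embedding toward its frequency-domain embedding via cosine distance. The function name `tfc_consistency_loss` and the plain cosine-distance form are my own illustration, not necessarily the paper's exact formulation (the paper's loss also involves negative pairs).

```python
import torch
import torch.nn.functional as F

def tfc_consistency_loss(z_t: torch.Tensor, z_f: torch.Tensor) -> torch.Tensor:
    """Cosine-distance consistency between time-domain embeddings z_t
    and frequency-domain embeddings z_f (both shaped [batch, dim])."""
    z_t = F.normalize(z_t, dim=1)
    z_f = F.normalize(z_f, dim=1)
    # 1 - cosine similarity per sample, averaged over the batch
    return (1.0 - (z_t * z_f).sum(dim=1)).mean()

# usage with dummy embeddings
z_t = torch.randn(8, 128)
z_f = torch.randn(8, 128)
loss = tfc_consistency_loss(z_t, z_f)
```

A term like this would typically be added to the two contrastive losses with a weighting coefficient.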

@1057699668

Hello, I noticed this too. I modified the loss function to use the time-frequency consistency loss, and the final experimental results differed significantly from the paper. I hope the author can clear up this doubt for us.

@yuyunannan

Can you get good results from the other three experiments? How do you set the parameters?

@1057699668

> Can you get good results from the other three experiments? How do you set the parameters?

Sorry, I can't reproduce the results of the other three experiments either. I can only reproduce the one-to-one result from SleepEEG to Epilepsy with the original model parameter settings.

@1057699668

> Sorry, I can't reproduce the results of the other three experiments either. I can only reproduce the one-to-one result from SleepEEG to Epilepsy with the original model parameter settings.

I also tried pre-training and fine-tuning on other datasets, but got bad performance.

@yuyunannan

> I also tried pre-training and fine-tuning on other datasets, but got bad performance.

I have made many attempts; only the SleepEEG experiment comes close to the paper's results. The other results are bad.

@1057699668

> I also tried pre-training and fine-tuning on other datasets, but got bad performance.

> I have made many attempts; only the SleepEEG experiment comes close to the paper's results. The other results are bad.

Perhaps only the author can answer these questions for us.

@zzj2404

zzj2404 commented Apr 1, 2023

> I have made many attempts; only the SleepEEG experiment comes close to the paper's results. The other results are bad.

Have you solved the subset problem?

@1057699668

1057699668 commented Apr 1, 2023 via email

Sorry, I haven't solved the subset problem yet. Perhaps the author only provided the correct settings for the SleepEEG → Epilepsy experiment.

@JohnLone00

The author's code seems to have a problem: the backbone uses torch's TransformerEncoderLayer API but never sets batch_first to true, even though, given the author's data format, batch_size comes first. It also does not seem reasonable to use a TransformerEncoder on single-channel time-series input.
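To make the batch_first point concrete, a small sketch (the dimensions below are arbitrary, not the repo's actual settings): PyTorch's TransformerEncoderLayer defaults to batch_first=False, meaning it expects input shaped (seq_len, batch, d_model), so batch-first data needs the flag set explicitly.

```python
import torch
from torch import nn

# TransformerEncoderLayer defaults to batch_first=False, i.e. it expects
# input shaped (seq_len, batch, d_model). If the data is actually
# (batch, seq_len, d_model), pass batch_first=True so self-attention
# runs over the time axis rather than over the batch axis.
d_model, seq_len, batch = 64, 100, 8
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

x = torch.randn(batch, seq_len, d_model)  # (batch, time, features)
out = encoder(x)
print(out.shape)  # torch.Size([8, 100, 64])
```

Without batch_first=True, the same call would silently attend over the batch axis instead of the time axis, which produces no error but wrong behavior.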

@maxxu05

maxxu05 commented Apr 18, 2023

Yes, this has also been mentioned in issue #19. I agree that the single-channel time-series input doesn't make sense, especially since the transformer is currently coded such that the "time" axis the self-attention mechanism attends over is actually the singular channel. As a result, the sequence length the self-attention mechanism attends over is only 1.
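A quick illustration of the length-1 attention issue (shapes are arbitrary): if the channel axis is what the transformer sees as the sequence, then with a single channel each "sequence" has length 1, so every token attends only to itself and self-attention degenerates to a no-op.

```python
import torch
from torch import nn

# Single-channel series of length L, with the channel axis mistaken for
# the sequence axis: the self-attention mechanism sees a sequence of
# length 1, so each token attends only to itself.
batch, channels, L = 8, 1, 178
layer = nn.TransformerEncoderLayer(d_model=L, nhead=2, batch_first=True)

x = torch.randn(batch, channels, L)  # sequence length seen by attention = channels = 1
out = layer(x)
print(out.shape)  # torch.Size([8, 1, 178])
```

The fix would be to feed (batch, L, d_model)-shaped input so attention runs over the L time steps instead.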
