
question for the results #6

Open · sousoul opened this issue Jul 25, 2021 · 15 comments

@sousoul commented Jul 25, 2021

Hello, when I run the command below to get the results for the video timelapse in dataset 1:

python obs_parima.py -D 1 -T timelapse --fps 30 -O 0 --fpsfrac 1.0 -Q 1080p

I get the following results:

======= RESULTS ============
PARIMA
Dataset: 1
Topic: timelapse
Pred nframe: 30.0
Avg. QoE: 255.55823261464883
Avg. Manhattan Error: 0.8833892109817905
Avg. Matrix Error: 0.9158664463334143
Avg. X_MAE: 197.75282207603703
Avg. Y_MAE: 90.24902002462285

The Manhattan Error here differs from Table 1 in the paper:

[screenshot: Table 1 from the paper, captured 2021-07-25]

I don't know why I got this different result. Maybe I am using the code in a wrong way?
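
For context, the Manhattan Error reported above appears to be the Manhattan distance between the predicted tile and the tile the user actually viewed, averaged over all predictions (the debug prints later in this thread expose exactly those tile indices). A minimal sketch of that metric, with all names hypothetical:

    import numpy as np

    def avg_manhattan_error(actual_tiles, pred_tiles):
        """Mean Manhattan distance between actual and predicted (row, col) tile indices."""
        actual = np.asarray(actual_tiles)
        pred = np.asarray(pred_tiles)
        return np.abs(actual - pred).sum(axis=1).mean()

    # e.g. actual tile (0, 0) vs predicted tile (1, 1) -> error 2
    print(avg_manhattan_error([(0, 0)], [(1, 1)]))  # 2.0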

@sousoul (Author) commented Aug 21, 2021

I printed the results by uncommenting lines 155 to 159 in parima.py:

		######################################################
		print("x: " + str(x_act))
		print("x_pred: " + str(x_pred))
		print("y: " + str(y_act))
		print("y_pred: " + str(y_pred))
		print("(" + str(actual_tile_row) + "," + str(actual_tile_col) + "),(" + str(pred_tile_row) + "," + str(pred_tile_col) + ")")
		######################################################

and found that the prediction always lands in tiles (0, 0), (1, 0), or (1, 1). The output looks like this:

x: 161
x_pred: 625.5722775570365
y: 155
y_pred: 306.84832202756934
(0,0),(1,1)
x: 161
x_pred: 625.6430497567117
y: 155
y_pred: 306.7216460505056
(0,0),(1,1)
x: 170
x_pred: 625.9193381905325
y: 158
y_pred: 306.90741257262346
(0,0),(1,1)
x: 178
x_pred: 625.8357150417579
y: 160
y_pred: 306.7751003190363
(0,0),(1,1)
x: 178
x_pred: 625.7140908126182
y: 160
y_pred: 306.78683072215523
(0,0),(1,1)
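
That pattern is consistent with the predicted pixel coordinates living on a different scale than the tile grid. A minimal sketch of mapping a pixel (x, y) to a tile index, assuming an 8x8 grid over a WxH frame (the grid size and all names here are illustrative assumptions, not the repository's actual code):

    def pixel_to_tile(x, y, W, H, ntiles_w=8, ntiles_h=8):
        """Map an equirectangular pixel (x, y) to its (tile_row, tile_col)."""
        tile_col = min(int(x / (W / ntiles_w)), ntiles_w - 1)
        tile_row = min(int(y / (H / ntiles_h)), ntiles_h - 1)
        return tile_row, tile_col

    # Predictions stuck around (625, 306) never leave the top-left corner of a
    # 3840x2048 frame, matching the (0,0),(1,1) pairs printed above:
    print(pixel_to_tile(625.5, 306.8, W=3840, H=2048))  # (1, 1)
    print(pixel_to_tile(161, 155, W=3840, H=2048))      # (0, 0)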

@sarthak-chakraborty (Owner) commented:

I think you have not adjusted the height and width of the frame while converting the quaternions to their equirectangular form.

@sousoul (Author) commented Aug 22, 2021

> I think you have not adjusted the height and width of the frame while converting the quaternions to their equirectangular form.

I changed head_orientation_lib.H and head_orientation_lib.W in head_orientation_lib.py (which are used to convert the quaternions to their equirectangular form in get_view.py) as follows:

H = 2048
W = 3840

Then I converted the quaternions to the equirectangular form again:

python get_viewport.py -D 1 -T timelapse --fps 30

and ran PARIMA again:

python obs_parima.py -D 1 -T timelapse --fps 30 -O 0 --fpsfrac 1.0 -Q 1080p

But I still do not get the expected results:

======= RESULTS ============
PARIMA
Dataset: 1
Topic: timelapse
Pred nframe: 30.0
Avg. QoE: 83.27577715398802
Avg. Manhattan Error: 3.018615430035326
Avg. Matrix Error: 1.2081570821086087
Avg. X_MAE: 693.0926154421804
Avg. Y_MAE: 370.38163820352423

Maybe I have missed some detail when running the code?

@sousoul (Author) commented Aug 23, 2021 via email

Thank you for your reply. I reset H=300 and W=600 in head_orientation_lib.py, and the Manhattan Error is 0.681, which is a little better than the 0.685 reported in the paper. One more question: why can we not set H and W equal to the resolution of the equirectangular frame, for example H=2048 and W=3840? In the paper "Your Attention is Unique: Detecting 360-Degree Video Saliency in Head-Mounted Display for Head Movement Prediction", it seems that H and W are the resolution of the equirectangular frame rather than of the video player, i.e., the user's viewport.

@junhua-l commented Nov 8, 2021

> I think you have not adjusted the height and width of the frame while converting the quaternions to their equirectangular form.

Thanks for your reply. But how can I change them?

@junhua-l commented Nov 8, 2021

> Thank you for your reply. I reset H=300 and W=600 in head_orientation_lib.py, and the Manhattan Error is 0.681 […]

Hello, I have a similar question. I changed H and W but still get different results on dataset 2. Could you please tell me where you downloaded dataset 1? The link returns a 404 error. Did you do anything else to reproduce the results?

@sousoul (Author) commented Nov 13, 2021

> Hello, I have a similar question. I changed H and W but still get different results on dataset 2. Could you please tell me where you downloaded dataset 1? The link returns a 404 error. Did you do anything else to reproduce the results?

  1. Do you mean the question of how to set H and W? I don't think I tested dataset 2. How did you set H and W, and by how much do your results differ?
  2. dataset 1 can be downloaded from https://dl.acm.org/do/10.1145/3193701/abs/

@sarthak-chakraborty (Owner) commented:

> One more question: why can we not set H and W equal to the resolution of the equirectangular frame, for example H=2048 and W=3840? […]

The H and W in head_orientation_lib are used to get the pixel in the equirectangular frame corresponding to the quaternion. To get the appropriate pixel representation, the range needs to be selected such that it falls within the frame size of the equirectangular image. Hence H and W need to be set equal to the height and width of the corresponding equirectangular frame.
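
As an illustration only (this is not the repository's code, and the axis conventions are assumptions), converting a gaze-direction vector to an equirectangular pixel typically looks like the sketch below, which is why the pixel range is bounded by W and H:

    import numpy as np

    def direction_to_pixel(v, W, H):
        """Project a unit gaze-direction vector onto a WxH equirectangular frame."""
        x, y, z = v
        theta = np.arctan2(y, x)             # longitude in [-pi, pi]
        phi = np.arcsin(np.clip(z, -1, 1))   # latitude in [-pi/2, pi/2]
        px = (theta + np.pi) / (2 * np.pi) * W  # column in [0, W)
        py = (np.pi / 2 - phi) / np.pi * H      # row in [0, H)
        return px, py

    # The same gaze direction lands on very different pixels at different H, W:
    v = np.array([0.5, 0.5, np.sqrt(0.5)])
    print(direction_to_pixel(v, W=720, H=360))    # (450.0, 90.0)
    print(direction_to_pixel(v, W=3840, H=2048))  # (2400.0, 512.0)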

@liuyingjie0329 commented:

Do you mean that H and W in the program are not the height and width of each video frame, but rather the user's viewport size?

@sarthak-chakraborty (Owner) commented:

H and W are the height and width of the complete equirectangular frame.

@liuyingjie0329 commented:

> H and W are the height and width of the complete equirectangular frame.

In the program, H and W are 360 and 720, respectively. But the height and width I obtain from the equirectangular frame are 1280 and 2560.

@sarthak-chakraborty (Owner) commented:

Kindly change H and W based on the experimental data that you are using. H=360, W=720 was probably set because we were running some other experiments where the equirectangular frame size was 360x720. I hope this answers the question.
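
A quick way to find the right values for your own data is to read one frame of the equirectangular video; a minimal sketch using OpenCV (the video path is a placeholder):

    import cv2  # opencv-python

    # Hypothetical path to one of the equirectangular videos in the dataset.
    cap = cv2.VideoCapture("videos/timelapse.mp4")
    ok, frame = cap.read()
    cap.release()

    if ok:
        H, W = frame.shape[:2]  # e.g. (2048, 3840) for a 4K equirectangular video
        print(f"Set head_orientation_lib.H = {H}, head_orientation_lib.W = {W}")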

@liuyingjie0329 commented:

> Kindly change H and W based on the experimental data that you are using. […]

I understand what you mean. Thank you very much!

@liuyingjie0329 commented:

Sorry, but I still have a question about the program. What is the relationship between height, width, view_height, and view_width in meta.json, and how do they relate to H and W?

@bbgua85776540 commented:

> 2. dataset 1 can be downloaded from https://dl.acm.org/do/10.1145/3193701/abs/

The dataset archive at that link (https://dl.acm.org/do/10.1145/3193701/abs/) is corrupted; it only contains trajectory data for some of the users. Could you send me a complete copy?
