Hello, I have a question about your model: how does the model supervise the Semantic Decoder and the Boundary Decoder? How are the parameters of the backbone, the Semantic Decoder, and the Boundary Decoder updated?
Hello, I have another question. In the paper you use OT (optimal transport) to match the ground-truth points with the sampling points, but in your config code you use HungarianAssigner for this matching. What is the difference between the two methods?
@jahoyan
OT is used to generate the pseudo mask. Based on the pseudo mask, the HungarianAssigner then assigns positive/negative samples, the same as in the original fully supervised method.
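To make the two-stage pipeline above concrete, here is a minimal sketch (not the repository's code) of the second stage: assuming OT has already produced a soft pseudo mask of gt-to-prediction affinities, Hungarian matching picks the positive samples. The array values and the variable names (`pseudo_mask`, etc.) are purely illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Suppose the OT step has already produced a soft pseudo mask of shape
# (num_gt, num_preds): higher values mean a prediction is more likely
# to correspond to that ground-truth point.
pseudo_mask = np.array([
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.3],
])

# Hungarian matching minimizes total cost, so negate the affinities.
cost = -pseudo_mask
gt_idx, pred_idx = linear_sum_assignment(cost)

# Matched predictions become positive samples; the rest are negatives,
# mirroring the assignment in the fully supervised setting.
pos_preds = set(pred_idx)
neg_preds = set(range(pseudo_mask.shape[1])) - pos_preds
print(sorted(pos_preds), sorted(neg_preds))  # prints: [0, 1] [2]
```

The point of the two stages is that OT resolves the global, many-to-many ambiguity of weak labels into a pseudo mask, after which the standard one-to-one Hungarian assignment can be reused unchanged.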
@jahoyan
The parameters of the Semantic Decoder and the Boundary Decoder are driven by the weakly supervised loss, which includes the semantic loss and the boundary loss. Please see Sec. 3.4.1 of the paper.
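Since both decoders sit on top of a shared backbone, summing the two losses means one backward pass updates all three components. Here is a hedged toy illustration (not the paper's code) with scalar stand-ins: a shared backbone feature `f = w * x`, a semantic head `s = a * f`, and a boundary head `b = c * f`. All symbols are illustrative assumptions.

```python
# Toy scalar model: w plays the backbone, a the Semantic Decoder,
# c the Boundary Decoder. All values are arbitrary.
x, w, a, c = 2.0, 0.5, 3.0, -1.0
s_target, b_target = 4.0, 0.0

f = w * x                     # shared backbone feature
s = a * f                     # semantic decoder output
b = c * f                     # boundary decoder output

L_sem = (s - s_target) ** 2   # stand-in for the semantic loss
L_bnd = (b - b_target) ** 2   # stand-in for the boundary loss
L = L_sem + L_bnd             # total weakly supervised loss (Sec. 3.4.1)

# Manual backprop: the gradient w.r.t. the shared backbone weight w
# accumulates contributions from BOTH decoder branches.
dL_df = 2 * (s - s_target) * a + 2 * (b - b_target) * c
dL_dw = dL_df * x
```

This is why no separate supervision is needed for the backbone: the semantic and boundary losses each contribute a term to `dL_dw`, so the backbone, Semantic Decoder, and Boundary Decoder are all updated jointly by the same loss.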