How to move depths to the same gpu as decoded_imgs? #5611
Hi @kristinat8, Thank you for reaching out. Can you try:
If I want to process the depth in the pipeline in CHW format, is there any way I can do it? Thanks!
Hi @kristinat8, I'm not sure if I understand your ask.
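For reference: converting a decoded HWC image to CHW inside a DALI pipeline is typically done with `fn.transpose(images, perm=[2, 0, 1])` (the snippet posted later in this thread does exactly that on both the image and depth branches). A minimal NumPy sketch of the same axis permutation, for illustration only:

```python
import numpy as np

# A toy HWC tensor, as produced by DALI's decoder/resize stages (H=2, W=3, C=4).
hwc = np.arange(2 * 3 * 4).reshape(2, 3, 4)

# The same permutation that fn.transpose(..., perm=[2, 0, 1]) applies:
# axis 2 (channels) moves to the front, giving CHW.
chw = np.transpose(hwc, (2, 0, 1))

print(chw.shape)  # (4, 2, 3)
```

The same `perm=[2, 0, 1]` argument works for the depth branch, as long as the depth tensor is also HWC at that point.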
Hello, when I use distributed computing I encountered a decoding error with the following code. Could you please tell me what might be the reason?

```python
@pipeline_def(batch_size=80, num_threads=16, enable_conditionals=True)
def VideoPipe(total_picture, file_list, dfile_list, local_crops_number, frame_per_clip):
    rank = utils.get_rank()
    world_size = utils.get_world_size()
    input = fn.readers.file(file_list=file_list, random_shuffle=False,
                            num_shards=world_size, shard_id=rank)
    depth = fn.readers.file(file_list=dfile_list, random_shuffle=False,
                            num_shards=world_size, shard_id=rank)
    shapes = fn.peek_image_shape(input[0])
    # crop_anchor, crop_shape = fn.random_crop_generator(shapes, random_area=[0.2, 1.0])
    # crop_anchor = fn.permute_batch(crop_anchor, indices=batch_size * [0])
    # crop_shape = fn.permute_batch(crop_shape, indices=batch_size * [0])
    # init_crop
    num_clips = total_picture // frame_per_clip
    indices = np.concatenate([i * np.ones(frame_per_clip, dtype=int) for i in range(num_clips)])
    indices = indices.tolist()
    crop_anchor, crop_shape = fn.random_crop_generator(shapes, random_area=[0.2, 1.0])
    crop_anchor = fn.permute_batch(crop_anchor, indices=indices)
    crop_shape = fn.permute_batch(crop_shape, indices=indices)
    images = fn.decoders.image_slice(input[0], crop_anchor, crop_shape,
                                     device="mixed", axis_names="HW").gpu()  # HWC
    depths = fn.decoders.image_slice(depth[0], crop_anchor, crop_shape,
                                     device="mixed", axis_names="HW")  # HWC
    images = fn.resize(images, resize_x=300, resize_y=300, device="gpu")  # HWC
    depths = fn.resize(depths, resize_x=300, resize_y=300, device="gpu")  # HWC
    frames = fn.transpose(images, perm=[2, 0, 1])  # CHW
    depths = fn.transpose(depths, perm=[2, 0, 1])  # CHW
    # frames = fn.normalize(frames, mean=0.0, stddev=255.0, device="gpu")  # CHW
    global1, global1_depth = process_global(frames, depths, indices)
    # global1_combined = process_global(frames, depths, indices)
    locals, local_depths = map(list, zip(*[process_local(frames, depths) for _ in range(local_crops_number)]))
    return global1, global1_depth, *locals, *local_depths
```

The run fails with:

```
[ERROR] [nvjpeg_cuda_decoder] Could not decode jpeg code stream - nvjpeg error #4 (Jpeg not supported) when running nvjpegJpegStreamParse(handle, static_cast<const unsigned char*>(stream_ctx->encoded_stream_data), stream_ctx->encoded_stream_data_size, false, false, p.parse_state.nvjpeg_stream) at /home/jenkins/agent/workspace/nvimagecodec/helpers/release_v0.3.0/Release_11/build/extensions/nvjpeg/cuda_decoder.cpp:472
```
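As an aside on the snippet above: the `indices` passed to `fn.permute_batch` repeat each clip index `frame_per_clip` times; output sample `j` takes the crop parameters of input sample `indices[j]`, so all frames of one clip share a single random crop. A minimal sketch of that index construction with hypothetical sizes:

```python
import numpy as np

# Hypothetical values: 8 decoded frames grouped into clips of 4 frames each.
total_picture, frame_per_clip = 8, 4
num_clips = total_picture // frame_per_clip

# Same construction as in the pipeline above: each clip index is repeated
# frame_per_clip times, so every frame of a clip maps to the same source
# sample when fn.permute_batch redistributes the crop anchors/shapes.
indices = np.concatenate([i * np.ones(frame_per_clip, dtype=int) for i in range(num_clips)])

print(indices.tolist())  # [0, 0, 0, 0, 1, 1, 1, 1]
```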
Hi @kristinat8, can you provide the full error log? It should print the name of the image that caused this error. It seems that one of the images in your data set is corrupted. Can you check whether you can open it in any image viewer? If so, it would help us a lot if you could provide the image for our examination; maybe there is a gap in our decoding support.
Describe the question.
Hi, I want to load images and depth via external_source. Images are in jpg format and depth is in npy format, and I need the depth and images to match. In the pipeline, images are on the GPU but depths are on the CPU; for subsequent processing I need to do the same cropping and stitching on both. The following problem occurs in fn.cat:
```
Traceback (most recent call last):
  File "mashangshan.py", line 70, in
    outputs = pipe.run()
  File "/home/u202320081200023/miniconda3/envs/dora/lib/python3.8/site-packages/nvidia/dali/pipeline.py", line 1328, in run
    return self.outputs()
  File "/home/u202320081200023/miniconda3/envs/dora/lib/python3.8/site-packages/nvidia/dali/pipeline.py", line 1166, in outputs
    return self._outputs()
  File "/home/u202320081200023/miniconda3/envs/dora/lib/python3.8/site-packages/nvidia/dali/pipeline.py", line 1251, in _outputs
    return self._pipe.Outputs()
RuntimeError: Critical error in pipeline:
Error in GPU operator nvidia.dali.fn.cat,
which was used in the pipeline definition with the following traceback:
  File "mashangshan.py", line 62, in combined_pipeline
    combined = fn.cat(decoded_imgs, depths, axis=2)
encountered:
Assert on "inputs_[idx].device == StorageDevice::GPU" failed: The input 1 is not on the requested device (GPU).
C++ context: [/opt/dali/dali/pipeline/workspace/workspace.h:637]
Current pipeline object is no longer valid.
```
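The assertion says that input 1 of `fn.cat` (the depths) is still a CPU tensor while the operator runs on the GPU; DALI does not move operator inputs between devices implicitly. The usual fix is to transfer the CPU `DataNode` with `.gpu()` before concatenating (shown as comments below, since `nvidia.dali` is not assumed runnable here). The executable NumPy lines only illustrate the channel-axis concatenation the pipeline performs; the shapes are hypothetical:

```python
import numpy as np

# In the DALI pipeline definition, the usual fix is a one-line transfer:
#     depths = depths.gpu()  # move the CPU DataNode to the GPU
#     combined = fn.cat(decoded_imgs, depths, axis=2)
# Alternatively, produce the depths on the GPU in the first place, e.g. with
# fn.external_source(..., device="gpu") -- an assumption about this pipeline.

# NumPy illustration of the same axis=2 concatenation in HWC layout:
img = np.zeros((300, 300, 3), dtype=np.uint8)    # decoded RGB image
depth = np.zeros((300, 300, 1), dtype=np.uint8)  # single-channel depth map
combined = np.concatenate([img, depth], axis=2)

print(combined.shape)  # (300, 300, 4)
```

Note that `fn.cat` requires matching extents on all non-concatenated axes, so both branches need the same crop and resize applied first.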