[docs] how to use new ipc in windows #9322

Closed
Xiaobaishushu25 opened this issue Apr 1, 2024 · 10 comments
Labels
type: documentation Need to update the API documentation

Comments

@Xiaobaishushu25

I updated to v2, but file transfer between JS and Rust is still slow on Windows. Here is my Cargo.toml:

[build-dependencies]
tauri-build = { version = "2.0.0-beta", features = [] }

[dependencies]
tauri = { version = "2.0.0-beta.13", features = ["custom-protocol"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
chrono = "0.4.31"
tokio = { version = "1.20", features = ["macros", "rt-multi-thread"] }
tauri-plugin-dialog = "2.0.0-beta.3"
tauri-plugin-http = "2.0.0-beta.3"
tauri-plugin-fs = "2.0.0-beta.3"


[features]
# This feature is used for production builds or when a dev server is not specified, DO NOT REMOVE!!
custom-protocol = ["tauri/custom-protocol"]

This is my Rust code:

use chrono::Local;
use std::fs::OpenOptions;
use std::io::Write;
use std::time::Instant;
use tauri::Window;

// Event payload (definition implied by the snippet).
#[derive(Clone, serde::Serialize)]
struct Payload {
    message: String,
}

#[tauri::command]
async fn append_chunk_to_file(
    window: Window,
    path: String,
    chunk: Vec<u8>,
    end: bool,
) -> Result<(), String> {
    let current_time = Local::now().time();
    println!("enter rust time: {}", current_time);
    println!("start{:?}", Instant::now()); // time the command was received: Instant { t: 644913.1384745s }
    tokio::spawn(async move {
        let mut file = OpenOptions::new()
            .create(true)
            .append(true)
            .open(&path)
            .map_err(|e| e.to_string())
            .unwrap();
        file.write_all(&chunk).map_err(|e| e.to_string()).unwrap();
        if end {
            window.emit("insert", Payload { message: path }).unwrap();
        }
    });
    let instant = Instant::now();
    println!("end{:?}", instant); // time the command returned: Instant { t: 644913.1386845s }
    let current_time = Local::now().time();
    println!("return time: {}", current_time);
    Ok(())
}

I call it from JS:

import { invoke } from "@tauri-apps/api/core";

// `file` is a File object, `url` is the destination path
const content = await file.arrayBuffer();
const content1 = new Uint8Array(content);
await invoke("append_chunk_to_file", { path: url, chunk: content1, end: true });

It took 8 seconds to transfer a 23MB image using the code above (most of the time is spent in serialization and deserialization), but in #7170 @lucasfernog said "A command returning a 150MB file now takes less than 60ms to resolve. Previously: almost 50 seconds."
Where did I go wrong? Do you have the correct example code?

Xiaobaishushu25 added the type: documentation label Apr 1, 2024
@lucasfernog
Member

By default the IPC still serializes data (even though that is now faster than it used to be). To read buffers (and send a response containing large data) you must use tauri::ipc::Request and tauri::ipc::Response, sending a raw binary array on the JS side:

use std::path::PathBuf;

#[tauri::command]
pub(crate) async fn append_chunk_to_file(
    window: tauri::Window,
    request: tauri::ipc::Request<'_>,
) -> crate::Result<tauri::ipc::Response> {
    if let tauri::ipc::InvokeBody::Raw(data) = request.body() {
        let path = PathBuf::from(request.headers().get("path").unwrap().to_str().unwrap());
        let end = request.headers().get("end").unwrap() == "true";
        Ok(tauri::ipc::Response::new(data.clone()))
    } else {
        todo!()
    }
}

invoke("append_chunk_to_file", new Uint8Array([]), {
  headers: {
    path: "/path/to/file",
    end: "false",
  },
});
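
On the JS side, a command that returns tauri::ipc::Response resolves to the raw bytes (an ArrayBuffer) rather than a JSON value. A minimal sketch of reading the response back, assuming the command and headers above; the payload here is just illustrative:

import { invoke } from "@tauri-apps/api/core";

// The command above echoes the request body back as a raw response,
// so invoke resolves to an ArrayBuffer instead of a JSON value.
const data = await invoke("append_chunk_to_file", new Uint8Array([1, 2, 3]), {
  headers: {
    path: "/path/to/file",
    end: "false",
  },
});
console.log(new Uint8Array(data).byteLength); // 3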

@lucasfernog
Member

We still need to document this on the official documentation website.

@Xiaobaishushu25
Author

(quoting @lucasfernog's reply above)

Thank you very much for your reply. I have tested the new code and the speed has greatly improved. It now takes about 500 milliseconds to transfer a 23MB file (previously 8 seconds), and most of the time is still spent between JavaScript and Rust. But there still seems to be a significant gap compared to "a 150MB file now takes less than 60ms to resolve".
This is the new Rust code:

use chrono::Local;
use std::fs::OpenOptions;
use std::io::Write;
use std::path::PathBuf;

#[tauri::command]
async fn new_append_chunk_to_file(request: tauri::ipc::Request<'_>) -> Result<(), String> {
    let current_time = Local::now().time();
    println!("enter rust time: {}", current_time);
    if let tauri::ipc::InvokeBody::Raw(data) = request.body() {
        let path = PathBuf::from(request.headers().get("path").unwrap().to_str().unwrap());
        // let end = request.headers().get("end").unwrap() == "true";
        let mut file = OpenOptions::new()
            .create(true)
            .append(true)
            .open(&path)
            .map_err(|e| e.to_string())
            .unwrap();
        file.write_all(data).map_err(|e| e.to_string()).unwrap();
        let current_time = Local::now().time();
        println!("return time: {}", current_time);
        Ok(())
    } else {
        todo!()
    }
}

This is the JS code:

startTime = new Date().getTime();
console.log(`js start time${new Date().getSeconds()}:${new Date().getMilliseconds()}`);
await invoke("new_append_chunk_to_file", content1, {
  headers: {
    path: url,
    end: "false",
  },
});
endTime = new Date().getTime();
console.log(`js end time${new Date().getSeconds()}:${new Date().getMilliseconds()}`);
console.log(`write image took ${endTime - startTime} milliseconds.`);

Here are the running results:

js start time14:258
Edit.vue:180 js end time14:736
Edit.vue:183 write image took 478 milliseconds.
enter rust time: 22:30:14.671063800
return time: 22:30:14.728613500


Is this speed normal?

@Jengamon

Jengamon commented Apr 4, 2024

(quoting @Xiaobaishushu25's comment above)

The claim of a 150MB file resolving in 60ms covers, AFAIK, only returning it, most likely Rust->JS, so it only accounts for the IPC time. Your command measures not only the IPC time but also the time it takes to open a file and write all the data to it, so it will definitely be longer than the 60ms claim, because it is that plus IO (and IO, AFAIK, isn't all that fast).

A command that should run in 60ms according to the claim:

// We never actually error, but w/e
#[tauri::command]
async fn give_me_150mb_file() -> Result<Vec<u8>, String> {
    const THE_150MB_FILE: &[u8] = include_bytes!("some_150mb.dump");
    Ok(THE_150MB_FILE.to_vec())
}
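
Note that returning Vec<u8> like this still serializes the bytes; per @lucasfernog's comment above, a variant that sends them as a raw IPC response should avoid that cost entirely. A sketch under that assumption, using the same placeholder file:

// Variant returning the bytes raw via tauri::ipc::Response (no serialization);
// on the JS side, invoke resolves this to an ArrayBuffer.
#[tauri::command]
async fn give_me_150mb_file_raw() -> tauri::ipc::Response {
    const THE_150MB_FILE: &[u8] = include_bytes!("some_150mb.dump");
    tauri::ipc::Response::new(THE_150MB_FILE.to_vec())
}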

@Pagla-Dasu

Hey all,
A little context: I am converting files using ffmpeg and need to save the converted file on the local machine. But the speed is very slow, and the app freezes if the file is too big. I am not that familiar with Rust, but I want to learn what this IPC can do and how I can save big files to my system fast. Any help is very much appreciated.

This is my TS code at the moment:

await ffmpeg.exec(ffmpeg_cmd);
const data = (await ffmpeg.readFile(output)) as any;
const uint8Data = new Uint8Array(data) as any;
await fs.writeBinaryFile(outputPath, uint8Data);

Thank you in advance.
(Also, I am using Tauri v1. Do I need the v2 beta to use this properly?)

@FabianLars
Member

@Pagla-Dasu

(also I am using tauri-v1, do I need to have v2 beta to use this properly?)

Yes. The changes are too large to backport to v1.

@Pagla-Dasu

Yes. The changes are too large to backport them to v1.

@FabianLars, how do I save my file with maximum speed?

@MegaSa1nt

how do I save my file with maximum speed?

#9322 (comment)

@weartist

weartist commented Sep 11, 2024

@Xiaobaishushu25 Excuse me, I also tried the binary transfer in v2, but I did not reach the quoted 150MB-in-60ms speed. In my tests I transferred about 30MB per second. Did your transfers reach the official speed?

@gregpalaci

gregpalaci commented Oct 1, 2024

(quoting @Pagla-Dasu's comment above)

In Node you need to stream instead of using writeBinary; a duplex stream might work: https://blog.dennisokeeffe.com/blog/2024-07-11-duplex-streams-in-nodejs
It's slow with writeBinaryFile because it keeps everything in memory.

https://medium.com/deno-the-complete-reference/10-use-cases-of-streams-in-node-js-273f02011f60#a9c1

https://blog.platformatic.dev/a-guide-to-reading-and-writing-nodejs-streams#heading-how-backpressure-works

https://www.npmjs.com/package/ffmpeg-stream
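
The same idea carries over to the Tauri side: instead of sending one giant buffer, slice the file in JS and append chunk by chunk through the raw IPC body, along the lines of the append_chunk_to_file command earlier in this thread. A sketch; the command name, header names, and 4MB chunk size are illustrative:

import { invoke } from "@tauri-apps/api/core";

const CHUNK_SIZE = 4 * 1024 * 1024; // illustrative; tune for your workload

// Stream a File to disk in slices instead of materializing it all in memory,
// reusing the raw-body command pattern shown earlier in this thread.
async function streamFileToDisk(file, path) {
  for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
    const slice = file.slice(offset, offset + CHUNK_SIZE);
    const chunk = new Uint8Array(await slice.arrayBuffer());
    const end = offset + CHUNK_SIZE >= file.size;
    await invoke("new_append_chunk_to_file", chunk, {
      headers: { path, end: String(end) },
    });
  }
}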
