[docs] how to use new ipc in windows #9322
By default the IPC still serializes data (even though it is now faster at doing so). To read raw buffers (and to send a response containing large data) you must use `tauri::ipc::Request` and `tauri::ipc::Response`:

```rust
use std::path::PathBuf;

#[tauri::command]
pub(crate) async fn append_chunk_to_file(
    window: tauri::Window,
    request: tauri::ipc::Request<'_>,
) -> crate::Result<tauri::ipc::Response> {
    if let tauri::ipc::InvokeBody::Raw(data) = request.body() {
        // Metadata travels in request headers; `path` and `end` are read
        // here but left unused in this minimal example.
        let path = PathBuf::from(request.headers().get("path").unwrap().to_str().unwrap());
        let end = request.headers().get("end").unwrap() == "true";
        Ok(tauri::ipc::Response::new(data.clone()))
    } else {
        todo!()
    }
}
```

On the frontend, pass the binary payload as the invoke body and the metadata as headers:

```js
invoke("append_chunk_to_file", new Uint8Array([]), {
  headers: {
    path: "/path/to/file",
    end: "false",
  },
});
```

---
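The `path` and `end` headers are left unused in the minimal example above. As a hypothetical sketch (plain Rust, not a Tauri command, with an invented `append_chunk` helper), this is roughly what the command body could do with them: append each chunk to the target file and, when `end` signals the last chunk, flush and sync it to disk.

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

// Hypothetical helper: append one chunk to the file at `path`.
// When `end` is true, flush and sync so the final chunk is durable.
pub fn append_chunk(path: &Path, data: &[u8], end: bool) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    file.write_all(data)?;
    if end {
        file.flush()?;
        file.sync_all()?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Demo: two chunks, the second marked as the last one.
    let path = std::env::temp_dir().join("append_chunk_demo.bin");
    let _ = std::fs::remove_file(&path);
    append_chunk(&path, b"hello ", false)?;
    append_chunk(&path, b"world", true)?;
    assert_eq!(std::fs::read(&path)?, b"hello world");
    Ok(())
}
```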
We still need to document this on the official documentation website.

---
Thank you very much for your reply. I have tested the new code and the speed has greatly improved: transferring a 23 MB file now takes about 500 milliseconds (previously 8 seconds), with most of the time still spent between JavaScript and Rust. However, there still seems to be a significant gap compared to the claim that a "150MB file now takes less than 60ms to resolve".

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::PathBuf;

use chrono::Local;

#[tauri::command]
async fn new_append_chunk_to_file(request: tauri::ipc::Request<'_>) -> Result<(), String> {
    println!("enter rust time: {}", Local::now().time());
    if let tauri::ipc::InvokeBody::Raw(data) = request.body() {
        let path = PathBuf::from(request.headers().get("path").unwrap().to_str().unwrap());
        // let end = request.headers().get("end").unwrap() == "true";
        let mut file = OpenOptions::new()
            .create(true)
            .append(true)
            .open(&path)
            .map_err(|e| e.to_string())?;
        file.write_all(data).map_err(|e| e.to_string())?;
        println!("return time: {}", Local::now().time());
        Ok(())
    } else {
        todo!()
    }
}
```

This is the JS code:

```js
startTime = new Date().getTime();
console.log(`js start time ${new Date().getSeconds()}:${new Date().getMilliseconds()}`);
await invoke("new_append_chunk_to_file", content1, {
  headers: {
    path: url,
    end: "false",
  },
});
endTime = new Date().getTime();
console.log(`js end time ${new Date().getSeconds()}:${new Date().getMilliseconds()}`);
console.log(`write image took ${endTime - startTime} milliseconds.`);
```

Here are the running results:

---
The claim of a 150 MB file resolving in 60 ms only covers, AFAIK, returning it, most likely Rust -> JS, so it only accounts for the IPC time. Your command measures not only the IPC time but also the time it takes to open a file and write all the data to it, so it will definitely be longer than the 60 ms claim, because it is that plus IO (and IO, AFAIK, isn't all that fast). A command that should run in 60 ms according to the claim:

```rust
// We never actually error, but w/e
#[tauri::command]
async fn give_me_150mb_file() -> Result<Vec<u8>, String> {
    const THE_150MB_FILE: &[u8] = include_bytes!("some_150mb.dump");
    Ok(THE_150MB_FILE.to_vec())
}
```

---
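The "IPC time vs. IO time" point above can be checked outside Tauri entirely. The following is a minimal, self-contained sketch (not a Tauri command; file name and data are made up) that times an in-memory copy of ~23 MB, a rough stand-in for just moving the bytes, separately from writing the same bytes to disk:

```rust
use std::io::{BufWriter, Write};
use std::path::Path;
use std::time::{Duration, Instant};

// Writes `data` to `path` through a BufWriter and returns the elapsed time.
fn timed_disk_write(path: &Path, data: &[u8]) -> std::io::Result<Duration> {
    let start = Instant::now();
    let mut out = BufWriter::new(std::fs::File::create(path)?);
    out.write_all(data)?;
    out.flush()?;
    Ok(start.elapsed())
}

fn main() -> std::io::Result<()> {
    // ~23 MB of synthetic data, roughly the file size discussed above.
    let data = vec![7u8; 23 * 1024 * 1024];

    // In-memory copy only: no disk involved.
    let start = Instant::now();
    let copy = data.clone();
    let copy_time = start.elapsed();

    // The extra cost a command like `new_append_chunk_to_file` pays on top of IPC.
    let path = std::env::temp_dir().join("io_cost_demo.bin");
    let write_time = timed_disk_write(&path, &copy)?;

    println!("in-memory copy: {copy_time:?}, disk write: {write_time:?}");
    assert_eq!(std::fs::metadata(&path)?.len(), data.len() as u64);
    Ok(())
}
```

On most machines the disk write dominates, which is consistent with the explanation above that the 60 ms figure excludes IO.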
Hey all, this is my TS code at this point:

```ts
await ffmpeg.exec(ffmpeg_cmd);
const data = (await ffmpeg.readFile(output)) as any;
const uint8Data = new Uint8Array(data) as any;
await fs.writeBinaryFile(outputPath, uint8Data);
```

Thank you in advance.

---
Yes. The changes are too large to backport to v1.

---
@FabianLars, how do I save my file with maximum speed?

---
|
@Xiaobaishushu25 Excuse me, I also tried the binary forwarding in v2, but the speed did not reach the claimed 150 MB in 60 ms. In my tests the transfer ran at about 30 MB per second. Did your transfer reach the official speed?

---
In Node you need to be streaming instead of using writeBinary; a duplex stream might work: https://blog.dennisokeeffe.com/blog/2024-07-11-duplex-streams-in-nodejs https://medium.com/deno-the-complete-reference/10-use-cases-of-streams-in-node-js-273f02011f60#a9c1

---
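The links above are Node-specific, but the same idea of streaming in chunks instead of materializing one big buffer also applies on the Rust side of a Tauri app. A minimal sketch (plain Rust, with an invented `stream_to_file` helper; the reader in a real app might be a socket or a child process's stdout) using the standard library:

```rust
use std::io::{self, BufReader, BufWriter, Read, Write};
use std::path::Path;

// Streams from any reader to a file in buffered chunks instead of
// holding the whole payload in memory before writing.
fn stream_to_file<R: Read>(reader: R, path: &Path) -> io::Result<u64> {
    let mut src = BufReader::new(reader);
    let mut dst = BufWriter::new(std::fs::File::create(path)?);
    let copied = io::copy(&mut src, &mut dst)?;
    dst.flush()?;
    Ok(copied)
}

fn main() -> io::Result<()> {
    // Demo: "stream" 1 MB from an in-memory reader.
    let payload = vec![42u8; 1024 * 1024];
    let path = std::env::temp_dir().join("stream_demo.bin");
    let n = stream_to_file(&payload[..], &path)?;
    assert_eq!(n, payload.len() as u64);
    Ok(())
}
```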
I updated to v2, but the file transfer between JS and Rust is still slow on Windows. Here is my Cargo.toml:

This is my Rust code:

I use it in JS:

It took 8 seconds to transfer a 23 MB image using the code above (most of the time is spent in serialization and deserialization), but in #7170 @lucasfernog says "A command returning a 150MB file now takes less than 60ms to resolve. Previously: almost 50 seconds."
Where did I go wrong? Do you have correct example code?