
Implement new Rock Paper Scissors recipe #73

Open
modcarroll opened this issue Sep 28, 2022 · 5 comments

Watson Visual Recognition is deprecated, so we should find a (preferably) open source replacement

@jweisz jweisz added the refresh label Jan 13, 2023
jweisz (Collaborator) commented Jan 13, 2023

Cesar will own this. He has experimented with OpenCV and will train an object detection model for the RPi.

@jweisz jweisz changed the title Replace Visual Recognition Replace Visual Recognition with on-device solution Jan 13, 2023
jweisz (Collaborator) commented Jan 27, 2023

@cmaciel is getting 1.1-1.5 FPS with a pre-trained model (MobileNet, which has 90 object classes) on a RPi 3. He will re-train the model to focus only on objects found inside a house to get better results.
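For reference, a throughput figure like 1.1-1.5 FPS can be obtained with a simple timing loop around the inference call. This is a sketch, not Cesar's actual benchmark: `detect()` here is a stand-in for the real MobileNet forward pass, which is an assumption.

```javascript
// Minimal FPS measurement harness. detect() is a placeholder for the
// real on-device inference call (e.g. a MobileNet forward pass on a
// captured frame); it is NOT part of tjbotlib.
function measureFps(detect, frames) {
    const start = Date.now();
    for (let i = 0; i < frames; i++) {
        detect(); // run one inference on a captured frame
    }
    const elapsedSec = (Date.now() - start) / 1000;
    return frames / elapsedSec;
}

// Example with a stubbed detector that busy-waits ~10 ms per "inference":
const stubDetect = () => {
    const until = Date.now() + 10;
    while (Date.now() < until) { /* simulate inference cost */ }
};
console.log(`~${measureFps(stubDetect, 20).toFixed(1)} FPS`);
```

On a real Pi the loop body would capture a frame and run the model; the harness itself is the same.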

jweisz (Collaborator) commented Mar 24, 2023

Oops, the model was trained but the data labels were missing! :D

jweisz (Collaborator) commented May 5, 2023

Need to create a list of common household items to train a new model. Maybe focus the model on rock, paper, and scissors? Perhaps tjbotlib could provide generic functions for loading local models and running inference, and then we create a separate recipe that uses that API to load a rock/paper/scissors model locally in order to play the game.
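To make the library/recipe split concrete, here is a hypothetical sketch of what a generic local-model API could look like. None of these names (`LocalVisionModel`, `classify`) exist in tjbotlib today; the point is only that the library stays model-agnostic while the recipe supplies the rock/paper/scissors specifics.

```javascript
// HYPOTHETICAL API sketch -- not part of tjbotlib. The library would
// provide generic load/inference plumbing; the recipe supplies the
// labels and the trained model.
class LocalVisionModel {
    constructor(labels, inferFn) {
        this.labels = labels;   // class names, supplied by the recipe
        this.inferFn = inferFn; // backend inference (e.g. OpenCV/TFLite)
    }
    classify(imagePath) {
        const scores = this.inferFn(imagePath); // one score per label
        let best = 0;
        for (let i = 1; i < scores.length; i++) {
            if (scores[i] > scores[best]) best = i;
        }
        return { label: this.labels[best], score: scores[best] };
    }
}

// The rock-paper-scissors recipe would plug in its own labels and model;
// the inference function is stubbed here for illustration.
const rpsModel = new LocalVisionModel(
    ["rock", "paper", "scissors"],
    () => [0.1, 0.7, 0.2] // stubbed per-class scores
);
console.log(rpsModel.classify("/tmp/photo.jpg")); // { label: 'paper', score: 0.7 }
```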

jweisz (Collaborator) commented Sep 22, 2023

I propose a two-part solution for eliminating Watson Visual Recognition:

  1. Remove the see() functionality from tjbotlib and add a look() function that just takes a picture and returns the filename in the filesystem (e.g. somewhere in /tmp). In fact, I think this is what takePhoto() already does, so maybe we just rename that function.
  2. Add a new recipe to tjbot for rock-paper-scissors that uses @cmaciel's on-device vision model. We likely don't want to bundle the whole model inside the tjbot repo because it's likely big (right?), so maybe the recipe downloads the model from somewhere on first run (i.e. check whether it's already in the filesystem and download it if not). We'd need a permalink for hosting the model somewhere.

@cmaciel if this plan makes sense to you, I'd like to close this card and open two new cards to cover 👆. Thanks!

@jweisz jweisz changed the title Replace Visual Recognition with on-device solution Implement new Rock Paper Scissors recipe Mar 1, 2024