- cross-posted to:
- opensource@programming.dev
Tidy: Offline semantic text-to-image and image-to-image search on Android, powered by a quantized state-of-the-art CLIP vision-language model and the ONNX Runtime inference engine
Features
- Text-to-Image search: Find photos using natural language descriptions.
- Image-to-Image search: Discover visually similar images.
- Automatic indexing: New photos are automatically added to the index.
- Fast and efficient: Get search results quickly.
- Privacy-focused: Your photos never leave your device.
- No internet required: Works perfectly offline.
- Powered by OpenAI’s CLIP model: Uses advanced AI for accurate results.
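The features above boil down to one idea: CLIP encodes both photos and text queries into the same embedding space, so search is just a nearest-neighbor ranking over pre-computed image vectors. A minimal sketch of that ranking step, with made-up embedding vectors and file names standing in for what CLIP's encoders (run via ONNX Runtime) would actually produce:

```python
import numpy as np

# Hypothetical pre-computed image embeddings. In the real app, each vector
# would come from running CLIP's image encoder on a photo during indexing.
image_embeddings = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.8, 0.2],
    [0.0, 0.2, 0.9],
])
image_names = ["photo_001.jpg", "photo_002.jpg", "photo_003.jpg"]

def search(query_embedding, embeddings, names, top_k=3):
    """Rank images by cosine similarity to the query embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = e @ q  # cosine similarity after normalization
    order = np.argsort(-scores)[:top_k]
    return [(names[i], float(scores[i])) for i in order]

# A text query like "a dog on the beach" would be encoded by CLIP's text
# encoder; here we just use a made-up vector close to the first image.
query = np.array([0.85, 0.15, 0.05])
results = search(query, image_embeddings, image_names)
# results[0] is the best match
```

Image-to-image search works the same way, except the query vector comes from the image encoder instead of the text encoder.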
Does anybody know which CLIP model it uses?
That sounds fun! Let’s see what it has to say about the 19,000 photos on my phone! 😸
Just tried it out, works great! I was able to do some basic searches and most results made sense.
Two cool features would be:
- ability to delete photos; I could see search-then-delete being a good workflow
- if possible, a way to see what tags the images have, to understand how the CLIP output works
This looks great! I’d love to have this for all my Nextcloud pictures!