I've developed a tool that serves as a real-time overlay for detecting logical fallacies in political debates. It uses PyQt5 for the UI, Whisper for audio transcription, and a Mistral LLM served through the text-generation-webui API for the logical analysis. The overlay is transparent, so it's easy to keep on top of other windows like a live stream or video. I was able to run both Whisper and Mistral-7B-OpenOrca-GPTQ locally on a single RTX 3090, with a VRAM usage of 15GB.
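For anyone curious how the always-on-top transparency works, here's a minimal sketch (not the actual code from the repo) of a frameless, translucent, draggable PyQt5 window that stays above other applications:

```python
import sys
from PyQt5.QtCore import Qt, QPoint
from PyQt5.QtWidgets import QApplication, QLabel, QVBoxLayout, QWidget

class Overlay(QWidget):
    """Frameless, semi-transparent window that stays on top of other apps."""
    def __init__(self):
        super().__init__()
        self.setWindowFlags(Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint)
        self.setAttribute(Qt.WA_TranslucentBackground)
        self.label = QLabel("Waiting for transcript...", self)
        self.label.setStyleSheet(
            "color: white; background: rgba(0, 0, 0, 160); padding: 8px;"
        )
        layout = QVBoxLayout(self)
        layout.addWidget(self.label)
        self._drag_pos = QPoint()

    # Click and drag anywhere on the window to reposition it
    def mousePressEvent(self, event):
        self._drag_pos = event.globalPos() - self.frameGeometry().topLeft()

    def mouseMoveEvent(self, event):
        if event.buttons() & Qt.LeftButton:
            self.move(event.globalPos() - self._drag_pos)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    overlay = Overlay()
    overlay.resize(480, 120)
    overlay.show()
    sys.exit(app.exec_())
```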
Key Features:
- Real-time audio transcription captures what's being said in debates.
- Instant fallacy detection using a Large Language Model (LLM).
- The overlay is transparent, draggable, and stays on top for multitasking.
- Option to toggle between local LLM and ChatGPT for logical analysis (see the sketch after this list).
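Here's a rough sketch of the transcription-to-analysis loop with the local/ChatGPT toggle. This is illustrative only: the Whisper model size, the prompt wording, the file names, and the assumption that text-generation-webui exposes its OpenAI-compatible API on localhost:5000 are my choices, not necessarily what the repo does.

```python
import requests
import whisper

FALLACY_PROMPT = (
    "You are a debate analyst. Identify any logical fallacies in the "
    "following statement and briefly explain them:\n\n{statement}"
)

# Whisper handles the speech-to-text side; "base" is an arbitrary choice here.
stt_model = whisper.load_model("base")

def transcribe(wav_path: str) -> str:
    """Transcribe a short audio chunk captured from the debate stream."""
    return stt_model.transcribe(wav_path)["text"].strip()

def analyze(statement: str, use_local: bool = True, openai_key: str = "") -> str:
    """Send the transcript to either the local LLM or ChatGPT for fallacy analysis."""
    if use_local:
        # Assumes text-generation-webui is running with its OpenAI-compatible API enabled.
        url = "http://127.0.0.1:5000/v1/chat/completions"
        headers = {}
        model = "Mistral-7B-OpenOrca-GPTQ"  # the webui uses whichever model is loaded
    else:
        url = "https://api.openai.com/v1/chat/completions"
        headers = {"Authorization": f"Bearer {openai_key}"}
        model = "gpt-3.5-turbo"

    payload = {
        "model": model,
        "messages": [{"role": "user",
                      "content": FALLACY_PROMPT.format(statement=statement)}],
        "max_tokens": 200,
        "temperature": 0.2,
    }
    resp = requests.post(url, json=payload, headers=headers, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    text = transcribe("debate_chunk.wav")  # hypothetical audio chunk
    print(analyze(text, use_local=True))
```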
This tool aims to make it easier to spot logical inconsistencies in real time during political debates, thereby fostering a more informed electorate.
Check it out on [GitHub](https://github.com/latent-variable/Real_time_fallacy_detection), and I'd love to hear your thoughts!
Edit: typo