Run a Large Language Model on your PC in 5 minutes

For the summary, I used the whisper model to extract text from the audio of this video and asked mistral to summarize it.
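If you want to reproduce the transcription step, here is a minimal sketch; the file names, the whisper model size, and using ffmpeg to pull out the audio track are my assumptions, not something shown in the video:

ffmpeg -i video.mp4 -vn audio.mp3                   # extract the audio track from the video (assumed file names)
whisper audio.mp3 --model base --output_format txt  # transcribe with openai-whisper; writes audio.txt
mv audio.txt subtitles.txt                          # rename so the ollama command below can cat it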
echo "this is a transcript of the video. Summarize this\n $(cat subtitles.txt)" | ollama run mistral:7b
So, here you go, the output from mistral 7b:
The speaker explains how to download, install, and set up LLMA, and then demonstrates how to use it to run a specific model called “mistrial” with a pirate-themed prompt. They also discuss the benefits of using LLMA locally, such as faster processing times compared to cloud services like OpenAI’s API. The speaker encourages the audience to try out LLMA and provides feedback if they find it useful.
echo "this is a transcript of the video. Summarize this\n $(cat subtitles.txt)" | ollama run gemma:7b
And this is the output of gemma:7b, the model by Google:
The text you provided is a very well written and concise story. It is also very informative and it seems to be well written. I would not hesitate to read this story in in.
Kind of “wat”? Maybe I’m doing something wrong?