A use case of sentiment analysis LLMs for tracking sentiments in a personal knowledge base like Logseq.
I came upon a video by Matt D'Avella a while back about systematically tracking your mood and how doing so helps you cultivate awareness. I've never been the kind of person who can track--or, for that matter, even determine--my mood on a given day, but since I have ~300,000 words in my personal knowledge base in Logseq, I figured a computer might do the job for me!
I did a similar project earlier with the lexicon-based VADER module, but its accuracy was too low. This time I'm using a RoBERTa model fine-tuned on the GoEmotions dataset, which is still limited, not least by labellers who thought "LETS FUCKING GOOOOO" expressed anger. By some estimates, around 30% of the GoEmotions labels are blatantly wrong. Even recognizing those limitations, this is probably still the best and most cost-efficient solution we have.
An interesting example is Jamaica Kincaid's short story Girl, which the VADER model rated 0.9931 but the RoBERTa model rated -0.368, after normalizing the positive and negative scores. Given that it's a fragmented second-person narrative, the model had to pick up on subtler cues in the text to conclude--rightly--that the dominant emotion is negative rather than positive.
```python
[{'label': 'neutral', 'score': 0.7813245058059692},
 {'label': 'disapproval', 'score': 0.12890969216823578},
 {'label': 'annoyance', 'score': 0.0523228719830513}]
```
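The per-label scores above are the kind of output a Hugging Face text-classification pipeline returns. To collapse them into a single polarity number, one approach is to bucket the GoEmotions labels into positive and negative sets and take the difference of their summed scores. A minimal sketch of that idea (the label buckets and the `polarity` helper are my own choices, and this particular scheme won't reproduce the exact -0.368 figure above, since my normalization isn't necessarily the one I describe in the post):

```python
# Collapse GoEmotions-style label scores into one polarity number.
# The buckets below are my own grouping of GoEmotions labels;
# 'neutral' and ambiguous labels (e.g. 'surprise') are ignored.

POSITIVE = {
    "admiration", "amusement", "approval", "caring", "desire",
    "excitement", "gratitude", "joy", "love", "optimism",
    "pride", "relief",
}
NEGATIVE = {
    "anger", "annoyance", "disappointment", "disapproval",
    "disgust", "embarrassment", "fear", "grief", "nervousness",
    "remorse", "sadness",
}

def polarity(scores):
    """scores: list of {'label': str, 'score': float} dicts, in the
    shape a transformers text-classification pipeline returns."""
    pos = sum(s["score"] for s in scores if s["label"] in POSITIVE)
    neg = sum(s["score"] for s in scores if s["label"] in NEGATIVE)
    return pos - neg

# The top three labels the model gave for "Girl":
girl_scores = [
    {"label": "neutral", "score": 0.7813245058059692},
    {"label": "disapproval", "score": 0.12890969216823578},
    {"label": "annoyance", "score": 0.0523228719830513},
]
print(round(polarity(girl_scores), 3))  # prints -0.181: negative overall
```

Because "neutral" dominates the score mass here, the absolute polarity is small, but the sign still comes out negative, which is the part that matters for mood tracking.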