Yesterday I came across two fun and interesting uses of large language models, in quick succession.
First, I saw a post on Mastodon commenting on how “brutal” a web app called GitHub Roaster is in analysing a user’s profile and repository history. That’s a very accurate assessment. The app uses OpenAI’s GPT-4o to create “a short and harsh roasting” for a given profile. The result for my profile was sufficiently uncomfortable to read that I swiftly moved on!
Very soon afterwards, my friend Tim Kellogg replied to my boost of the original Mastodon post to point out another app, which takes a different angle. Praise my GitHub Profile has a fantastic strapline:
Instead of trying to tear each other down with AI, why not use it to help lift others up?
I love this approach!

(From a technical perspective, I noted that this app uses the prompt “give uplifting words of encouragement” with LLaMa 3.1 70b to create more positive output.)
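The app’s actual code isn’t shown here, so purely as a hypothetical sketch: a prompt like the one quoted above might be sent to a model through an OpenAI-compatible chat completions payload. The model identifier, payload shape, and helper function below are all my assumptions, not the app’s real implementation.

```python
# Hypothetical sketch only: how an app might ask an LLM for encouraging
# output. The model name and payload structure are assumptions.
import json


def build_praise_request(profile_summary: str) -> dict:
    """Build a chat-completions-style payload asking for uplifting output."""
    return {
        "model": "llama-3.1-70b",  # assumed identifier for LLaMa 3.1 70b
        "messages": [
            # System prompt mirrors the wording quoted in the post.
            {
                "role": "system",
                "content": "Give uplifting words of encouragement "
                           "about this GitHub profile.",
            },
            {"role": "user", "content": profile_summary},
        ],
    }


payload = build_praise_request(
    "A dozen public repos, mostly Python tooling; steady contributor."
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to whatever inference endpoint hosts the model; swapping the system prompt is all it takes to turn a “roaster” into a “praiser”.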
If we’re going to use these sorts of tools and make these kinds of apps, let’s do so in a positive manner. There are very real issues with the overuse of resources, and ongoing moral and legal debates around how the models have been trained; I have huge concerns about both. Notwithstanding that, I strongly believe that technology has the capacity to have a positive impact on society when used well, ethically, and thoughtfully. Like everything else, though, it is up to us to make the best and most positive use of what we have access to, what we create, and what we leave behind in the world. It is our individual, and our collective, responsibility.
Thank you, Xe, for being thoughtful about this. You’re inspiring!