I’ve been experimenting with local LLMs. The new hotness is
llamafile, which is
a very neat new way to package up models so that they run almost totally isolated
from dependencies. In theory you could stash one of these on a USB stick and
you’d have a brain in a box to survive the apocalypse.
This talk is from a few years back, but the llamafile project made me think of it.
This is a great demo of building an application that knows how to deploy itself.
I think it’s interesting that people are trying to Turing Test our most modern LLMs, considering that the models themselves are fine-tuned to avoid passing themselves off as anything but a helpful assistant.
I’m wondering, though: if GPT-4 were given the task of evaluating other LLMs, would it be able to tell the difference between people and other AIs…
“An idea-to-video platform”. I have no idea if this is the real deal, but it’s a killer demo video.
Of course, the gap between a demo and a real product is a pretty big one.