
I have been playing around with Grok a bit for some projects.
ffmpeg is a command-line video and audio tool that can do a lot of stuff: cut clips from a larger video, join two clips together, add some text, crop a video, convert file formats, etc. This is the program you want to use if you want to take short clips from longer videos and turn them into shareable, small files like a gif or webm. That is, make memes. Like this typical 65 year old feminist:
I’m an amateur though, so I have to go through tutorials to get what I want. Grok was fair at this: 1 hit, 1 slight miss, but helpful both times. I had used actual tutorial pages before trying Grok, so not many data points. AI definitely takes summaries actual people have made and regurgitates them nearly word for word; it spat out the same material I saw on some guy’s site previously. People who say AI violates copyright are serious. That is basically what it does. No hits or clicks for you, because AI copies your work without attribution. Well, sometimes it cites pages and other times it doesn’t. Either way, most people aren’t going to click through to the original creator’s site if they already have the answer in the AI box. Assuming a citation is even given.
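For reference, the ffmpeg invocations involved in that kind of meme-making are short one-liners. Here is a minimal sketch that just assembles the argument lists, so the flags are explicit; the filenames and timestamps are made-up examples, not from any real project.

```python
# Sketch of typical ffmpeg invocations for cutting and converting clips,
# built as argument lists. Filenames and timestamps are hypothetical.

def cut_clip(src, start, duration, out):
    # -ss seeks to the start time, -t limits the duration,
    # -c copy copies the streams without re-encoding (fast, lossless)
    return ["ffmpeg", "-ss", start, "-i", src, "-t", duration, "-c", "copy", out]

def to_webm(src, out):
    # VP9 video at ~1 Mbit/s; -an strips the audio track for a small file
    return ["ffmpeg", "-i", src, "-c:v", "libvpx-vp9", "-b:v", "1M", "-an", out]

cut_cmd = cut_clip("long_video.mp4", "00:12:30", "00:00:08", "clip.mp4")
webm_cmd = to_webm("clip.mp4", "meme.webm")
print(" ".join(cut_cmd))
print(" ".join(webm_cmd))
```

If ffmpeg is installed, either command can be run directly with `subprocess.run(cut_cmd, check=True)`.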
I find it interesting that the introduction of LLMs coincides with internet search engines like Google being demonstrably much worse than they were 20 years ago. Compare modern search to AI and AI is much better. It will quickly find at least helpful, if not perfect, answers to your questions. Usually. They are still trying to lobotomize them so they won’t say bad things about black people. But what if we compared AI results to Google in 2006? Assuming the AI was using 2006 data, I think it would be much closer to a draw. With good, neutral search engines, individuals could potentially create material with value, build a following, make a living off of it, and even change the cultural and political course of countries. Or at least more could than are currently able to. With bad search engines supplanted by relatively better AI, you get the product of a good search engine with less risk that random kulaks make it big outside the narrative gatekeepers. Quite the coincidence that the way things are working out seems to be in keeping with how centralized narrative control would like it to go.

Future predictions: AI takes the place of most search since it is currently more honest. As soon as there is mass adoption, search gets phased out. AI gets the same treatment as the old search and no longer delivers honest information from other people. Maybe independent webpages even get mass delisted and never linked to, more so than currently. When narrative-questioning queries are made, the AI immediately argues against counter-narrative ideas in its answers. No answer given without narrative reinforcement. Personalized gaslighting for every individual who might accidentally leave the reservation. I hope I am wrong. If this isn’t stopped before it’s too late, it will be well and truly over for freedom.
Another project I am working on: I remember a specific clip from one of James Burke’s series. That’s ~40 hours of content, and I don’t want to rewatch everything to find it. What I am looking for is a segment where Burke is talking about some inventions, like gunpowder and maybe a type of printing press or a water wheel, that were invented in China, but the Chinese never advanced them much or creatively figured out other ways the technology could be used. In short, there was an idea that experiments aren’t worth it because studying a small piece of the universe can’t tell you about the whole, so Asian technological advances were rather fitful and without direction. This was attributed to local religions and philosophies like Confucianism, and was contrasted with Western methods, which said you could study a piece to learn about the whole. There may have been some attribution to Christianity for this. It was a fairly short section of a whole episode, I believe.

I asked Grok about this and it gave some episode suggestions. I have gone through 3 and am on the 4th suggestion. Obviously the first 3 weren’t what I was looking for. If the 4th is a bust too, I will just move on. I have spent way more time trying to find this 30 second clip than is in any way warranted. Is my memory wrong, or is Grok just not able to find it? To be fair to Grok, it did state that what I was asking for didn’t exactly match what it had going on in its black box. I really feel like this short section exists somewhere in those 40 hours. Or my memory is faulty. I’ll only know for sure if that section is found. If anyone knows, please leave a comment.
Overall, what Grok did generate was interesting and helpful for first passes at research. How impressive what Grok spat out is depends on whether it was original or just copied off someone’s website. When it did talk about specific episodes, it usually had accurate summaries of them, and there were elements of what I was looking for. That is probably true of all James Burke’s work though. The idea of understanding changing when perceptions change is in every episode.
I will probably use Grok more in the future as a research aide though. It’s a lot better than Google, after all…
A bit of a fan of the SCP Foundation here, and it’s occurred to me that as an information source that’s meant to be fiction but often doesn’t read like it, there’s a fine line for AIs to keep from crossing.
So with that in mind: what do you figure would happen to an AI if one happened upon a book that was chock full of “AI cognitive hazards” within what would otherwise look like stylised human-orientated text?
As in stuff that’s perfectly normal (even if a bit odd) for humans to read but absolutely lethal to the integrity of AI LLMs and other similar constructs.
And as there is as yet no provable sentience behind any of the AIs presently available (in short, “there is no there there”), any government or corporate legal team would have a very tough time getting charges of “AI murder” to stick, and the idea of “malicious mischief” doesn’t really hold water, as they are the ones instigating a defensive response.
So it’s like this: in a world with an over-arching control freak vibe to it in the form of Mighty Morphing Power AIs destined to portend Doom Over Us All(TM), someone has to step up and become the First Weapons Shop of Isher.
BTW, ever read Max Barry’s book “Lexicon”?
It’s interesting that you asked that. I tried to see if I could get Grok to look at my book and make comments. Errors popped up because it was too long, so I asked it a few questions instead. More or less, it seems like it doesn’t learn or retain any information not fed to it by the people creating it. That is, its interactions with new information from random Joe Blow queries are temporary, held in RAM, and once the interaction is over they get purged and can’t be referred to again. It temporarily holds the query and references the permanent dataset to answer it. Additions to permanent memory are seemingly tightly controlled by the creators. I assume they add new things all the time, but I don’t know. If so, I suspect they already withhold some data that isn’t well liked. They could in theory try to prevent what you are discussing.
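That statelessness can be sketched in a few lines. The `fake_llm` function below is a stand-in for a real chat API call, not any actual vendor’s interface; the point is that the model only “remembers” whatever context you resend with each request.

```python
# Hypothetical stand-in for a chat-API call: the model sees only the
# messages passed in this request, nothing from earlier sessions.
def fake_llm(messages):
    return f"I can see {len(messages)} prior message(s)."

history = []
history.append({"role": "user", "content": "Here is chapter 1 of my book..."})
print(fake_llm(history))   # the model "knows" chapter 1 only because we sent it

history = []               # new session: the working memory is gone
print(fake_llm(history))   # nothing carries over unless we resend it
```

Real services that appear to remember you are doing this resending (or a summarized version of it) behind the scenes; the permanent weights only change when the creators retrain.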
My impression is that there is no there there, like you said. I am not worried about the current iterations getting wonky over data in specific queries, or even from anything the curators could feed it. Vox had an article on AI intelligence which you might find interesting:
https://voxday.net/2025/02/24/my-new-friend/
Grok is better than other LLMs, but as you said, LLMs regurgitate things. They synthesize; they can’t generate anything, so they are not AI. It has been shown in at least two papers that they cannot reason. And as you implied, it violates people’s “rights”, but the system was designed to transfer ownership from creator to reseller, who then makes copies and sells them. It is theft, but the logic that evidences this is always cryptic and people are ignorant. This “AI” is a new method, but ultimately the same thing.
Ya, I think ultimately it’s going to be used to screw creatives out of payment for their work and to make sure no unapproved narratives make it into the wild ever again. These two goals overlap a lot, I am sure. Copyright law is broken, and this seems like a swing in the complete opposite direction.
https://voxday.net/2025/02/23/copyright-must-reform/
In practice, I am sure consolidated companies will have their IP protected while anything not already worth billions will get screwed.
I tried asking Grok to listen to several pieces of music and tell me SPECIFIC things about them (where is the guitar in the stereo spectrum, what is the lead instrument doing with the melodies, can it isolate the bass guitar and tell me what it’s doing, what is the time signature, what is the key signature, etc.).
It was absolutely terrible. It regurgitated typical slop from other websites where it was OBVIOUSLY trying to use buzzwords to answer the question. It’s like when a child guesses at simple math problems.
It kept trying to explain that it doesn’t “hear” things the way a human “hears” things. Okay, sure. BUT, how come in the music world (in Logic Pro X on macOS, for example) we have all KINDS of semi-AI plugins that analyze music for things like EQ, the stereo spectrum, even actual NOTES, yet somehow AI can’t do the things we ALREADY have? Weird, innit?
So my conclusion: CURRENTLY, Grok cannot analyze music in a human way, or at least in a way that would help a human UNDERSTAND things about the music. Can’t do it. Almost worthless at this point. But perhaps, by being trained by a human as to what certain things in the music MEAN, AI could get better at this.
I ALSO have a competing theory that it’s all bunk, Grok is merely Elon’s method to help explain things to the world the way HE would explain things (and thus quite susceptible to bias and narrative protection) and it’s either most likely for evil purposes, OR Elon and Orange Man really ARE the white hats and this (Grok/AI) is their method for unveiling really dark truths to the world.
If an answer isn’t something easily amenable to crowdsourcing-type correctness, then LLMs are probably going to have a hard time with it. Not many people watch James Burke’s old shows, so there aren’t going to be large datasets with a well-tested consensus.
Maybe the same thing is true for music? I don’t really know, to be honest. Sound is quite amenable to very precise recording and analysis though. Maybe the current problem is just associating human terms with specific waveforms. I imagine if that were done systematically, AI could be made a lot better at music analysis. In a blind machine sort of way, it should be accurate though.
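As a toy illustration of that “blind machine” accuracy: even a naive DFT in plain Python recovers the pitch of a synthetic tone exactly. The 110 Hz sine here is made up for the demo; real analysis plugins do the same thing much faster with FFTs, and noisy real recordings are harder, but the waveform-level measurement itself is mechanical.

```python
import math, cmath

sample_rate = 500           # samples per second
tone_hz = 110.0             # the "unknown" pitch we will recover
n = sample_rate             # one second of audio -> 1 Hz bin resolution
wave = [math.sin(2 * math.pi * tone_hz * k / sample_rate) for k in range(n)]

def bin_magnitude(samples, b):
    # Naive discrete Fourier transform magnitude for frequency bin b
    m = len(samples)
    return abs(sum(s * cmath.exp(-2j * math.pi * b * k / m)
                   for k, s in enumerate(samples)))

# Since n == sample_rate, bin index b corresponds to b Hz; scan up to Nyquist
magnitudes = [bin_magnitude(wave, b) for b in range(n // 2)]
dominant_hz = max(range(len(magnitudes)), key=magnitudes.__getitem__)
print(dominant_hz)  # 110
```

Mapping that kind of measurement onto terms a musician uses (key, feel, “where the guitar sits”) is the part that would need the systematic human labeling described above.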
I think Elon is mostly just an actor playing a role.