If you're picking random sources for each, I agree. However, I know my subbed YTers, who make up 95+% of what I watch. For example, with one guy I start his video and press the 'L' key three times to jump 30+ seconds in, which is right when he gets to the useful info.
Some written reviews have a summary up top, which is often helpful in deciding to scan or leave. Some also have sub-headings in the article, which is great for getting the bits of interest and skipping the rest.
Have you tried the newish written summaries of videos produced by AI? I haven't, yet. Could be very helpful once the summaries become reliable.
Blind skipping through videos aside, I can read a review faster than I can watch one. But the major difference is that when you move around in a video, you are "skipping", while when you move around in a written review, you are "skimming". That makes all the difference in the world.
But, admitting up front that I don't really care at all about YouTube game reviews, I did attempt an experiment using AI. I asked Google Bard and CoPilot to give me summaries of Angry Joe's Angry Review of BG3. Anyone offering AI summaries of reviews is probably using CoPilot's API.
Google Bard failed entirely. It gave me a summary that initially sounded right, but it soon became clear it was inaccurate to the point that I didn't even have to check the actual video.
CoPilot, on the other hand, gave what appeared to be an accurate summary, so far as I could tell. My plan had been to check it against the video, but the video was mostly unwatchable for me, gratuitously wasting my time. From what I did gather from skipping around the video, CoPilot gave an accurate (and vastly shorter) summary.
The next video that came up in my search was gameranx's review. It was only 16 minutes, so I decided to try that one, but when I asked CoPilot for a summary, the result was clearly wrong, so I didn't bother watching the video. CoPilot was evidently combining the early-access review with the final review.
So, as you said, accuracy is going to be a problem here, and based on my understanding of generative AI, it may never be fixed. At the very least, it will be a long journey. AI in its current form will keep getting new bells and whistles to help you create content of one kind or another, but as far as being an accurate source of information goes, I don't see that being fixed without a lot of human intervention. Maybe that's the plan. I don't know.