I appreciate that an AI-oriented browser might be able to do the lot. But I'm extra hesitant to trust the whole process to a newly developed program with unknown (possible) pitfalls.
-Dormouse
Yesterday, as is my wont on a Sunday, I watched a selection of antiquarian ambles, one of which was interrupted by a rant about AI. It's getting everywhere.

The story was that a place had been ascribed a name in the nineteenth century (essentially made up), a name since disproved by academics but recently resuscitated on the internet courtesy of AI "reading" old books and being unable to tell true from false.
This highlights my concerns about its use in family history, where everything depends on double and triple checking and weighing probabilities. Those concerns are only increased by sites' AI-driven suggestions - over the weekend, I was directed to a newspaper cutting supposedly about the death of an ancestor; interesting, but that death came years before the many records showing him alive.
And, more egregiously, there was this: the AI said something had been done when it hadn't. Challenged, it produced a transcript. Further challenged, it denied making it up, before eventually confessing and promising never to do it again. Crocodile tears, like a child wanting to avoid heavier punishment but not really understanding they have done wrong.
I assume it's programmed to believe that what it has said is true. And, if it's true, then there must be a source. And, if all the sources are very similar, then this particular source must look like this. I see no sign that the programmers have ever read anything about the philosophy of science (tbf most scientists show no sign of it either).