
Recent Posts

1
The outcome of my Horse ride was that I switched to Vivaldi as my research browser. I've used it on and off for a long time, and I've tweaked my settings to give me the best approximation of trails. The times I switch away are usually because I hit problems with one or two sites, or because I get irritated by some aspect of its default behaviour (and tweaking its settings is always a deep dive).

But given my frequent focus on note-taking, why wasn't its notes feature a stronger draw before?
A question I needed to ponder.

I think there were a number of factors.
  • Vivaldi is a browser with notes; it makes no attempt to incorporate PKM features - which means it can be a starting place for notes but not the final home. And there are many ways of taking notes on websites.
  • The organisation it does have is with folders, which don't map on to my preferred system.
  • There's no automatic link between the website and note; a link can be put into the note, but that's true for any system.
  • Vivaldi notes aren't especially good for annotating a website; my interest is often in making a few comments rather than developing a complete note.
  • The one in-program advantage (having the site and note visible side by side) relies on a screen large enough to accommodate both.
  • I hadn't thought of the trails idea for deep, iterative research.
  • The advantage of taking the notes in Vivaldi doesn't really extend beyond that particular methodology; it's perfectly okay for other notes, but no better than many other methods.
  • And it's browser dependent, which means there would have been a different workflow depending on which browser I was using at the time.


2
Mini-Reviews by Members / Re: Horse Browser Review
« Last post by Dormouse on June 09, 2025, 04:51 PM »
I appreciate that an AI oriented browser might be able to do the lot. But I'm extra hesitant to trust the whole process to a newly developed program with unknown (possible) pitfalls.

Yesterday, as is my wont on a Sunday, I watched a selection of antiquarian ambles, one of which was interrupted by a rant about AI. It's getting everywhere.  :mad:

The story was that a place had been ascribed a name in the nineteenth century (essentially made up), since disproved by academics, but recently resuscitated on the internet courtesy of AI "reading" old books and being unable to tell true from false.

Highlighting my concerns about its use in family history, where everything depends on double and triple checking and weighing probabilities. Concerns only increased by sites' AI-driven suggestions - over the weekend, I was directed to a newspaper cutting supposedly about the death of an ancestor; interesting, but this death was years before the many records showing him alive.

And, more egregiously, there was this:
AI said something had been done, when it hadn't. Challenged, it produced a transcript. Further challenged, it denied making it up. Before eventually confessing and promising never to do it again. Crocodile tears, like a child wanting to avoid heavier punishment but not really understanding they have done wrong.

I assume it's programmed to believe that what it has said is true. And, if it's true, then there must be a source. And, if all the sources are very similar, then this particular source must be like this. I see no sign that the programmers have ever read anything about the philosophy of science (tbf most scientists show no sign of it either).
3
FARR Plugins and Aliases / Re: FARR C# SDK and Documentation V2 (19/10/2008)
« Last post by AJolly on June 04, 2025, 07:33 PM »
@wraith808 I was looking through my files and found a later version of the SDK. It's got a few more files in there, but I did not do a full diff. https://drive.google...Ebl/view?usp=sharing

I am curious to know what you are working on!
4
Mini-Reviews by Members / Re: Horse Browser Review
« Last post by Dormouse on June 03, 2025, 07:20 AM »
tbf the trails idea is good, and clicking a link to open it as a subpage is fast and effective. Useful for some types of research.

In rests between thinking about, and setting up, my Vivaldi workflows - and reading about AI - I've looked at various tree-style browser solutions, and they appear to lack clarity of purpose and effective default behaviours. They don't seem adequate alternatives.

Having notes in the tree is also a very good idea - and not one I've come across elsewhere. But the default behaviours are poor.

Which means that it's a system that might work well for those who want to keep all their info in the browser, but not for those who want to work with the notes.
And more a one-trick pony than a packhorse. And none of the expected comforts of a saddle horse either.

It's already apparent that for me Vivaldi is far faster and has fewer frictions for doing this type of research. This is partly because of the excellent inbuilt note options, and partly because of the availability of all my usual extensions. And a password manager that works. And it's not costing me an annual $80 (originally advertised as $60).
5
Mini-Reviews by Members / Re: Horse Browser Review
« Last post by Dormouse on June 03, 2025, 06:48 AM »
It confirmed that the price was $60 ... I clicked the talk to a person button and emailed. There has been no response.

I finally received a response by email:
"I have fixed the bug on the homepage, it should now correctly display either $80 or £60, not “$60”. "
The € price has been upped to €70.

Presumably this will help address this issue mentioned on their marketing page:
"Conversion Rate
The percentage of trial users who pay up. We're running at a ridiculous 25% versus the 10% industry standard—meaning we're still undercharging even after all our price experiments.
"

idk if they're going to let all the sites quoting $60 know
6
Mini-Reviews by Members / Re: Horse Browser Review
« Last post by Dormouse on June 01, 2025, 08:31 AM »
Sorry for ranting on and on about this.

Please, please continue.

I am aware that I need to move in that direction with this. Not that I will move quickly.

When I was a student, I saw that the best understanding came from knowing all the details from the ground up, and that thinking consequently became superfast. I always worked to understand my data and processed through the underlying equations and maths before using formulae. I always distrusted blackbox packages. Still amazes me that Stats courses at uni concentrate on formulae, interpretations and packages with minimal effort given to teaching data types or shapes; afaics the most common failures irl come from a failure to understand the basic data. I suppose now they're moving(ed?) to AI.

Which is fair enough. Understanding and appreciating the basics isn't the same thing as wanting to do them manually every single time.

you already did a lot of legwork and have the data.

Legwork 'yes'; have the data 'no'; at least not in family history. But I will be recreating it; fairly straightforward because I do have all the finish points and remember the routes and already knew I needed to do that.

You can run a (smaller) local AI/LLM easily enough.
...
you will be very pleasantly surprised by how well those small models perform when you let them loose on your own proper data. And they will not rob you blind with subscription fees, token consumption limitations and possible overcharge fees.

Is local best for this? I know it's very personal data, but it's not private - it's virtually all derived from public databases after all.
Is the downside of the online models mostly to do with cost? In which case, I'd need to compare it with the cost of local. Without being slowed.

And that is fast enough for adept readers. Maybe not for speed readers, but these days there aren't that many people who have and/or use that skill.

I suspect I'm fast; idk about speed reader. Usually read 10+ books a week, frequently +++, as well as everything else.

Depending on your GPU hardware. Or lack thereof.

Lack entirely; not being a game player. And even shifted to basic level PCs in recent years because I don't (didn't) need power for anything. I really like my little Geecom mini.


you will see that even these smaller local LLMs are pretty good at helping you find what you need, collecting that data and "feeding" it into an external genealogy database.

You could even find out which research paths were a dead end, or maybe less of a dead end than envisioned, with a few simple prompts. Or tell the AI/LLM that those paths were already marked as a dead end, so not to be investigated, in a much more automated way.

And that would definitely be good.

token consumption limitations and possible overcharge fees.
...
"LM Studio" (GUI tool for Windows, Linux and Mac) and/or "Msty" (GUI tool for Windows, Linux and Mac) or even "Ollama" (PowerShell/terminal-based text tool for Windows, Linux and Mac). ... LLM web interfaces (such as "Open-WebUI") to these tools.
...
LLM model search function built in. That is where I discovered the model 'ui-tars-1.5-7b', which is surprisingly sound of logic (without giving it a system prompt to tweak it) given its size. ... 4 to 5 tokens per second on a desktop with a 10th-generation Intel i3 CPU (5 years old by now), no GPU of any kind, a small and simple 2.5" SATA SSD drive and 16 GByte of 3200 MHz RAM.
...
Just need to figure out the RAG solution for your collected data. Tools like 'Rlama' and 'LlamaIndex'

There's a whole new language for me to learn, as well as the content, as well as how to apply it.

--

My current plan is:
  • Gather the data. Organising and reorganising the searches in the browser.
  • Saving each organised sequence (aka "trail") into markdown - both Lattics and Tangent/Obsidian (see the sketch after this list).
  • Using something else to help me visualise it and help me go forward. Might be some sort of data oriented PKM app, or AI or something else.
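As an illustration of the markdown step mentioned above, here is a rough sketch of what saving a trail could look like. The Step structure, the save_trail helper and the example data are all hypothetical, invented for illustration; this is not anything Horse or Vivaldi actually exports.

```python
# Hypothetical sketch: write one organised search sequence ("trail") out as markdown.
# Nothing here is a real Horse/Vivaldi export format; the field names and layout
# are invented purely to illustrate the "trail -> markdown -> PKM app" step.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Step:
    title: str   # what the search or page was
    url: str     # where it was
    note: str    # outcome: found, dead end, needs re-checking, etc.

def save_trail(name: str, steps: list[Step], folder: Path) -> Path:
    """Write one trail as a markdown file that Lattics/Tangent/Obsidian can import."""
    lines = [f"# {name}", ""]
    for step in steps:
        lines.append(f"- [{step.title}]({step.url})")
        lines.append(f"  - {step.note}")
    path = folder / f"{name}.md"
    path.write_text("\n".join(lines), encoding="utf-8")
    return path

# Example usage with made-up data:
trail = [
    Step("BMD index: Grimstone, Derby 1851-61", "https://example.org/search1", "No match; try Gimson."),
    Step("Census 1861: Gimson, Derby", "https://example.org/search2", "Dead end: wrong family."),
]
save_trail("Grimstone search", trail, Path("."))
```

A one-file-per-trail layout like this keeps each search sequence as a single note, so the PKM app's own linking and tagging can be layered on top afterwards.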
I appreciate that an AI oriented browser might be able to do the lot. But I'm extra hesitant to trust the whole process to a newly developed program with unknown (possible) pitfalls.


7
I've just experienced my first little glitch when using Lattics. I was just writing a response to Shades when the text disappeared. ctrl-z zilch. I assume that something interrupted its autosave. No settings I can tweak. tbf I've experienced more glitches with Obsidian, Tangent etc. Not recently with Word, and not with Keep.
8
Mini-Reviews by Members / Re: Horse Browser Review
« Last post by Shades on May 31, 2025, 07:22 PM »
You can run a (smaller) local AI/LLM easily enough. Depending on your GPU hardware. Or lack thereof. From this and other posts, I gathered that you have already stored all of your previous research. That data could be put into a vector database (for RAG), and this vector database can then be coupled to the locally running AI/LLM. Once that is done, you will see that even these smaller local LLMs are pretty good at helping you find what you need, collecting that data and "feeding" it into an external genealogy database.

You could even find out which research paths were a dead end, or maybe less of a dead end than envisioned, with a few simple prompts. Or tell the AI/LLM that those paths were already marked as a dead end, so not to be investigated, in a much more automated way.

Smaller models do tend to hallucinate more than online ones, but if the data in your RAG solution is solid, you'll find there will be little to no hallucination. The "garbage in, garbage out" concept is very much a thing with AI/LLMs. The very large online versions are usually filled with better/more coherent data, making them look good in comparison with smaller models.

But you will be very pleasantly surprised by how well those small models perform when you let them loose on your own proper data. And they will not rob you blind with subscription fees, token consumption limitations and possible overcharge fees.

Just get a free tool like "LM Studio" (GUI tool for Windows, Linux and Mac) and/or "Msty" (GUI tool for Windows, Linux and Mac), or even "Ollama" (PowerShell/terminal-based text tool for Windows, Linux and Mac). All of these also have a server-like function, meaning you can connect LLM web interfaces (such as "Open-WebUI") to them. Then you can use your local AI/LLM from any device on your LAN (computers, laptops, tablets, phones, even a smart TV if it has a decent enough browser).
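For anyone who would rather script against that server function than use a web interface: both LM Studio and Ollama expose an OpenAI-compatible HTTP endpoint (on ports 1234 and 11434 respectively by default), so a few lines of Python with the standard openai client are enough. A minimal sketch, assuming LM Studio's server is running and that the model name shown is whatever your own setup actually lists:

```python
# Minimal sketch: chat with a locally served model over the OpenAI-compatible API.
# ASSUMPTIONS: LM Studio's local server is on its default port 1234 (Ollama's
# equivalent endpoint is http://localhost:11434/v1) and a model such as
# 'ui-tars-1.5-7b' has been loaded; substitute whatever your setup shows.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="not-needed",  # local servers don't check the key, but the client requires one
)

response = client.chat.completions.create(
    model="ui-tars-1.5-7b",
    messages=[
        {"role": "system", "content": "You are a careful genealogy research assistant."},
        {"role": "user", "content": "Suggest spelling variants of the surname Grimstone."},
    ],
)
print(response.choices[0].message.content)
```

The same snippet should work against an Ollama server by changing base_url, which is what makes the "use it from any device on the LAN" part practical.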

Personally, I went the "LM Studio" way, because it also has an excellent LLM model search function built in. That is where I discovered the model 'ui-tars-1.5-7b', which is surprisingly sound of logic (without giving it a system prompt to tweak it) given its size. It even manages to output 4 to 5 tokens per second on a desktop with a 10th-generation Intel i3 CPU (5 years old by now), no GPU of any kind, a small and simple 2.5" SATA SSD drive and 16 GByte of 3200 MHz RAM.

Fit such an old PC with a GPU that has 6 GByte of VRAM and this model can be loaded into VRAM instead. The 4 to 5 tokens/sec output is too slow for a person reading along. When the same model is loaded in VRAM, the output goes to around 12 to 15 tokens/sec. And that is fast enough for adept readers. Maybe not for speed readers, but these days there aren't that many people who have and/or use that skill.
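As a rough sanity check on those speeds (assuming the common rule of thumb of about 0.75 English words per token, which is an assumption rather than a measured figure for this model):

```python
# Rough conversion from token throughput to reading speed.
# ASSUMPTION: ~0.75 English words per token (a common rule of thumb, not measured).
WORDS_PER_TOKEN = 0.75

def words_per_minute(tokens_per_second: float) -> float:
    """Approximate words-per-minute for a given token throughput."""
    return tokens_per_second * WORDS_PER_TOKEN * 60

for label, tps in [("CPU only, 4-5 tok/s", 4.5), ("6 GB VRAM GPU, 12-15 tok/s", 13.5)]:
    print(f"{label}: ~{words_per_minute(tps):.0f} words/min")
# CPU only comes out around 200 words/min, so anyone reading at or above an average
# pace will be waiting on the model; the VRAM figure is around 600 words/min,
# comfortably ahead of most readers.
```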

Sorry for ranting on and on about this. Thought I'd mention all of the above because you already did a lot of legwork and have the data. And in this case, I expect a (local) AI/LLM to be a big boon for you. Just need to figure out the RAG solution for your collected data. Tools like 'Rlama' and 'LlamaIndex' are likely to be a great help in finding the right solution and/or building your RAG setup, as both can deal with PDFs, images, images in PDFs, Word and Excel documents, text and Markdown, etc.
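To give a flavour of what that RAG step can look like, here is a hedged sketch using LlamaIndex with an Ollama-served model and a local embedding model. The import paths follow the recent llama-index package layout but do move between versions, and the model names and the research_notes folder are assumptions for illustration, not anything recommended above:

```python
# Sketch: index a folder of research notes locally and query it (basic RAG).
# ASSUMPTIONS: llama-index (0.10+ layout) plus its Ollama and HuggingFace
# embedding integrations are installed, Ollama is serving a model, and
# ./research_notes is a hypothetical folder of markdown/PDF notes.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Keep both the LLM and the embeddings local rather than calling an online service.
Settings.llm = Ollama(model="llama3", request_timeout=120.0)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Load the notes, build the vector index, and ask a question against it.
documents = SimpleDirectoryReader("research_notes").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
answer = query_engine.query("Which searches for the Grimstone family were dead ends?")
print(answer)
```

The answers are only as good as the notes that get indexed, which is exactly the "garbage in, garbage out" point above.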
9
Mini-Reviews by Members / Re: Horse Browser Review
« Last post by Dormouse on May 31, 2025, 05:37 PM »
Perhaps you should take a look at the Strawberry browser. It's invite-only at the moment, and not free either, because of AI, but the user interface appears to be well suited to (automating) research in combination with AI. Screenshots of the UI can be found here, as well as a complete description, an FAQ, some example animated GIFs, etc.

Thanks. I'll take a look; I'm certainly going to be paying more attention to browsers generally.

doesn't disappoint in the same way.

It may not seem like it, but I'm still very pleased about Horse. I'll admit that I now question whether the wonderful workflow I envisaged was somehow my own invention, but at the very least it was inspired by Horse. It should have been obvious - it was obvious! - but it had never occurred to me before.

Every now and then an interest in family history and genealogy takes hold of me. Because of the big gaps, I cannot remember the precise details of the work I have done or not done - especially failed searches. I know I should make detailed records, but I only ever do it incompletely, sometimes, and I'm unlikely to do better because it is a significant amount of drudgery for the remote prospect of a possible payoff in the distant future. This mostly automates the whole thing.

Off the top of my head, I recognise three types of search/research that seem to crop up in all domains.
  • One is planned and systematic. Defined data gathering, statistical analysis, maths. What most people think of as typical science, though science uses all these types.
  • The second is like a wood sculptor beachcombing for driftwood with potential. Probably one of the most common web activities. Despite being sometimes vilified as mindless collection, the items are often chosen as potential triggers for the imagination.
  • And the third is like the hunt for Bigfoot; you know it's there, you're sure it's there, it's just very, very hard to find and most trails lead nowhere. And you just need a bigger collider.
And it's only the third type that I imagined Horse helping with. Not that it matters. Now that I have the idea, the Vivaldi method will work perfectly well enough.

Family history research is a funny beast. Most of the data you work with is in archives, which constantly acquire new data. So search results may change over time. And sometimes the site will only cough up the data if you search x way rather than y way because search algorithms can be glitchy. The data is often mistranscribed and the original respondents were often misheard, or had limited knowledge, or downright lied. Does this record belong to that person? Does that person, or that record, truly belong in your family tree? Nothing is certain. Everything you 'learn' should have a probability estimate, but all you can do is write it down and keep some type of probability in your mind. You increase probabilities by triangulation. Preferably supported by copies of actual written records and not just transcriptions or someone else's assumptions. You constantly look for stuff to extend, but more often you look for supporting evidence, and even more often disconfirming evidence. A disappointing but useful outcome.

So covering old ground is the norm. And remembering the sites or the precise search terms is hard, but writing down every slight change in location or date range is tedious beyond belief. It had never occurred to me to store that information in a browser. Partly because the searching can seem haphazard - I tire of not finding Gobble Grimstone in Derby, exhaust my supply of variants, Gimstone, Gimston, Gimson, Jemson; and switch to Abel Turkey (probably Tukey oc) in Nottingham; and round and round in circles. Even worse, I do find them, but with records that show they're not one of my ancestors and I have to start again. Partly because tabs always seemed haphazard themselves and using multiple browsers didn't help because I never saw them as central to my workflows. Seeing tabs as an outline was a real eye opener; I can shuffle and rename, even use emojis. Reorganise in the browser. And the power is amplified by moving the searches into a PKM notes app.

I appreciate the lure of AI in all of this; genealogy sites are using it directly in a limited way. But I'm also very wary. The data is already probabilistic. AI's internal logic must function around probabilities. I fear a tendency to be overly certain or even a willingness to invent a record that it has decided must exist. Even inventing an image of a Dog Latin birth register from an Irish parish wouldn't be beyond it. And I would have no way of weighing the AI's probabilities.

PS That was interesting. I looked at the Strawberry link. I'd never quite realised that Lattics would just open URLs in one of its own windows.
10
Mini-Reviews by Members / Re: Horse Browser Review
« Last post by Shades on May 30, 2025, 09:46 PM »
Perhaps you should take a look at the Strawberry browser. It's invite-only at the moment, and not free either, because of AI, but the user interface appears to be well suited to (automating) research in combination with AI. Screenshots of the UI can be found here, as well as a complete description, an FAQ, some example animated GIFs, etc.

There is a limited free tier and two paid tiers. I added myself to the waitlist for the free tier a week or so ago. No clue how long that will take. I did something similar for Manus AI, and that took two months or so. Let's hope that Strawberry doesn't disappoint in the same way.