Nothing new here, just a real implementation, because we didn't know how fast a DB with a keyword as the key would be. I can say it is very fast and doesn't need much RAM, but it does need disk space. Maybe an enterprise DB (Oracle, MySQL, etc.) can handle gigabytes of data better than SQLite, but the approach is the same.
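For anyone who wants to see what "keyword as the key" looks like in practice, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names are my own placeholders, not the poster's actual schema:

```python
import sqlite3

# In-memory DB for the sketch; the real test would use a file on disk.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE words (word TEXT PRIMARY KEY)")
con.executemany("INSERT OR IGNORE INTO words VALUES (?)",
                [("apple",), ("banana",), ("cherry",)])
con.commit()

# A PRIMARY KEY lookup goes through the implicit index, not a full scan.
found = con.execute("SELECT 1 FROM words WHERE word = ?",
                    ("banana",)).fetchone() is not None
print(found)  # True
```

The point is that SQLite keeps the key index on disk in B-tree pages, which is why lookups stay fast without loading everything into RAM.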
Of course, you must find the right program to work with the file, because some GUI apps (like SQLite DB Browser) load the whole file into RAM and need over 1 GB for a 100 MB database. The command-line version needs only about 3 MB instead.
I added a reply to the "ask the expert" thread I started, saying there has to be a "worst case scenario", with a keys-only DB likely to be it. That was yesterday, and I see it still hasn't cleared the moderator. I think they don't really want to bring up the Achilles heel, so I doubt I'll ever see my reply posted.
To really test this out, you should have some method that directly accesses the flat file, and compare the two for speed versus overhead. A dummy run of a few MB doesn't mean anything; just about any manipulation done entirely in RAM is going to be fast. We need a comparison of DB and non-DB access, say against an 8 GB flat file of words, and then see what happens.
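As a starting point, here is a rough sketch of such a comparison in Python, scaled way down to 50,000 words instead of 8 GB (file names, word list, and sizes are all my own placeholders, so the timings only illustrate the method, not the real answer):

```python
import os
import sqlite3
import tempfile
import time

# Small synthetic word list; a real test would use the 8 GB file.
words = [f"word{i:06d}" for i in range(50_000)]

# Flat file: one word per line.
flat = tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt")
flat.write("\n".join(words))
flat.close()

# SQLite file: the same words as primary keys.
db_path = flat.name + ".db"
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE words (word TEXT PRIMARY KEY)")
con.executemany("INSERT INTO words VALUES (?)", ((w,) for w in words))
con.commit()

target = words[-1]  # last word: worst case for a linear scan

# Flat-file search: sequential scan, line by line.
t0 = time.perf_counter()
with open(flat.name) as f:
    found_flat = any(line.strip() == target for line in f)
t_flat = time.perf_counter() - t0

# DB search: indexed key lookup.
t0 = time.perf_counter()
found_db = con.execute("SELECT 1 FROM words WHERE word = ?",
                       (target,)).fetchone() is not None
t_db = time.perf_counter() - t0

print(found_flat, found_db)
print(f"flat scan: {t_flat:.6f}s, db lookup: {t_db:.6f}s")

con.close()
os.unlink(flat.name)
os.unlink(db_path)
```

At multi-GB scale the interesting numbers would be the build time and disk footprint of the DB versus the scan time of the flat file, which is exactly the trade-off being argued about here.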
I would tend to guess the DB overhead would not be worth the effort compared to direct flat-file access and manipulation for a simple search. I also suspect that if you made a 34 GB table of keys, the DB would crash on the OP's machine.