
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Shades

1
Mini-Reviews by Members / Re: Horse Browser Review
« on: May 31, 2025, 07:22 PM »
You can run a (smaller) local AI/LLM easily enough, depending on your GPU hardware (or lack thereof). From this and other posts, I gathered that you have already stored all of your previous research. That data could be put into a vector database (for RAG), and this vector database can then be coupled to the locally running AI/LLM. Once that is done, you will see that even these smaller local LLMs are pretty good at helping you find what you need, collect that data, and "feed" it into an external genealogy database.

You could even find out which research paths were a dead end, or maybe less of a dead end than envisioned, with a few simple prompts. Or tell the AI/LLM that those paths were already marked as dead ends and should not be investigated, in a much more automated way.

Smaller models do tend to hallucinate more than online ones, but if the data in your RAG solution is solid, you'll find there will be little to no hallucination. The "garbage in, garbage out" concept is very much a thing with AI/LLMs. The very large online versions are usually filled with better/more coherent data, making them look good in comparison with smaller models. (A bare-bones sketch of the retrieval step follows below.)
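To make that retrieval idea a bit more concrete, here is a bare-bones sketch in PowerShell. It assumes an OpenAI-compatible server (LM Studio, Ollama, etc.) listening on http://localhost:1234 with an embedding model loaded; the model name, port and snippets are placeholders, not a real setup, and a proper vector database would replace the brute-force loop.

Code: PowerShell
# Bare-bones retrieval sketch; server address and model name are assumptions.
function Get-Embedding([string]$Text) {
    $body = @{ model = "nomic-embed-text"; input = $Text } | ConvertTo-Json
    $resp = Invoke-RestMethod -Uri "http://localhost:1234/v1/embeddings" -Method Post -ContentType "application/json" -Body $body
    return $resp.data[0].embedding
}

function Get-CosineSimilarity($A, $B) {
    $dot = 0.0; $na = 0.0; $nb = 0.0
    for ($i = 0; $i -lt $A.Count; $i++) {
        $dot += $A[$i] * $B[$i]
        $na  += $A[$i] * $A[$i]
        $nb  += $B[$i] * $B[$i]
    }
    return $dot / ([math]::Sqrt($na) * [math]::Sqrt($nb))
}

# Hypothetical research snippets; in practice these come from your stored notes.
$documents = @(
    "Notes on the Jansen branch, Groningen, 1870-1910.",
    "Baptism records transcribed from the Utrecht archive."
)
$question = Get-Embedding "Which branches of the family moved to Groningen?"
# The best-scoring snippets are what you would paste into the chat prompt.
$ranked = $documents | Sort-Object { Get-CosineSimilarity $question (Get-Embedding $_) } -Descending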

But you will be very pleasantly surprised how well those small models perform when you let them loose on your own, proper data. And they will not rob you blind with subscription fees, token consumption limits and possible overage fees.

Just get a free tool like 'LM Studio' (GUI tool for Windows, Linux and Mac) and/or 'Msty' (GUI tool for Windows, Linux and Mac), or even 'Ollama' (PowerShell/terminal-based text tool for Windows, Linux and Mac). All of these also have a server-like function, meaning you can connect LLM web interfaces (such as 'Open-WebUI') to these tools. Then you can use your local AI/LLM with any device in your LAN (computers, laptops, tablets, phones, even a smart TV if it has a decent enough browser). A sketch of such a request follows below.
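As an illustration of that server-like function: a single chat request against such a local server looks roughly like this from PowerShell. Port 1234 is LM Studio's default (Ollama uses 11434), and the model name is just an example; adjust both to what your tool reports.

Code: PowerShell
# One chat request to a local OpenAI-compatible endpoint; port and model are assumptions.
$body = @{
    model    = "ui-tars-1.5-7b"
    messages = @(@{ role = "user"; content = "Summarize my notes on the Jansen family line." })
} | ConvertTo-Json -Depth 5

$resp = Invoke-RestMethod -Uri "http://localhost:1234/v1/chat/completions" -Method Post -ContentType "application/json" -Body $body
$resp.choices[0].message.content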

Personally, I went the 'LM Studio' way, because it also has an excellent built-in LLM model search function. There I discovered the model 'ui-tars-1.5-7b', which is surprisingly sound of logic (without giving it a system prompt to tweak it) given its size. It even manages to output 4 to 5 tokens per second on a desktop with a 10th-generation Intel i3 CPU (5 years old by now), no GPU of any kind, a small and simple 2.5" SATA SSD and 16 GByte of 3200 MHz RAM.

Fit such an old PC with a GPU that has 6 GByte of VRAM and this model can be loaded into VRAM instead. The 4 to 5 tokens/sec output is too slow for comfortable reading. When the same model is loaded in VRAM, the output goes up to around 12 to 15 tokens/sec, and that is fast enough for adept readers. Maybe not for speed readers, but these days there aren't that many people left who have and/or use that skill.

Sorry for ranting on and on about this. I thought I'd mention all of the above because you already did a lot of legwork and have the data. In this case, I expect a (local) AI/LLM to be a big boon for you. You just need to figure out the RAG solution for your collected data. Tools like 'Rlama' and 'LlamaIndex' are likely to be a great help in finding the right solution and/or building your RAG solution, as both can deal with PDFs, images, images in PDFs, Word and Excel documents, text and Markdown, etc.

2
Mini-Reviews by Members / Re: Horse Browser Review
« on: May 30, 2025, 09:46 PM »
Perhaps you should take a look at the Strawberry browser. It is invite-only at the moment, and not free either (because of the AI), but the user interface appears to be well suited for (automating) research in combination with AI. You can find screenshots of the UI here, as well as a complete description, a FAQ, some example animated GIFs, etc.

There is a limited free tier and two paid tiers. I added myself to the waitlist for the free tier a week or so ago. No clue how long that will take. I did something similar for Manus AI, and that took 2 months or so. Let's hope that Strawberry doesn't disappoint in the same way.

3
Many years ago (2003) there was a very simple drawing tool called 'eve', plus a variant called 'eve-web'. Very tiny: less than 400 KByte for both applications together. And yet you could draw with it, it had some tools for making default pointers/shapes, the canvas didn't seem limited, and it was free.

You can download/create libraries of shapes you like to use, which can then be loaded into a new project.

Eve was free; eve-web wasn't back then. But from one of the links I see that eve-web is now also available for free.

I thought the links to it were gone, but I still found one that explains much better what it does:
https://bkhome.org/a...see/evewe/index.html

My browser (and computer) is not capable of showing the graphics on these pages. I won't be polluting any of the computers in my care with any software coming from Adobe unless I absolutely have to.
But in case your computer does have Adobe Acrobat or any of the other tools of the Adobe suite installed, you should see all the example drawings from the description and manual.

Here is the manual:
https://bkhome.org/a...nual/evewemanual.htm

Not sure if it will cover all your needs, but at less than 400 KByte you can surely try. HTML pages are way bigger than that nowadays.

The download link for version 3.56 of Eve works from that page, or directly from
http://bkhome.org/archive/goosee/eve.zip

and the download for EveWE is at
http://bkhome.org/ar...vewe/users/evewe.zip

4
Can confirm: as a user with member status, I also see the number of DC credits to my name.

Are you running some scripts and/or extensions in your browser that obscure parts of the on-screen data?

5
Post New Requests Here / Re: Variable-speed repeated button press
« on: January 08, 2025, 06:43 PM »
Below is an enhanced Python version, complements of ChatGPT

Which version of ChatGPT were you using? The free one? Good luck running the Python script generated by that one. The difference between the free tiers of online LLMs (not just ChatGPT) and the subscription models is significant. With the free one you'll very likely spend quite some time bug-hunting... that is, if the code works at all.

More often than not, you'll be better off writing it yourself. ChatGPT manages to bungle even something as simple as Docker Compose YAML files. For that and for SQL queries, the free tier of Phind has given me much better answers than ChatGPT did. Do with that information what you will, of course.

6
Post New Requests Here / Re: Variable-speed repeated button press
« on: January 08, 2025, 06:35 PM »
Below is an enhanced Python version, complements of ChatGPT

Python is good enough for prototyping. Unfortunately, around 90% of people are of the mindset that there is nothing more permanent than a temporary solution. So Python gets a lot of "love" from many. Far too many. And the LLMs feed the desire for even more Python like mad.

Ugh. If Python is the solution, I don't even care about the problem it is supposed to solve. Having experienced the dependency hell that far too many Python scripters have managed to manifest with their creations makes me very hesitant to use their products at all.

Sure, there are many Python scripters who do know how to write proper Python, but many more don't. And that has grown into a bitter dislike for anything Python. Many programmers (C++/C#) do not appreciate Python either; while it may look simple, it also comes with some deep-rooted (and significant) flaws.

Ah, the number of times I asked an LLM for scripts in PowerShell or bash and got Python instead, I can no longer count on all the digits of all my extremities. Even when indicating that I absolutely don't want a Python script as an answer to my request... I still get Python from LLMs.

Nah... no Python for me, if I can help it. While I realize the solution in my previous post isn't for everyone, it suits my needs perfectly.

7
Post New Requests Here / Re: Variable-speed repeated button press
« on: January 07, 2025, 10:53 PM »
Imagine repeatedly pressing the Page-Down key.  It can get old fast.  Poor index finger.

Agree with you there about automatic scrolling.

My solution is/was both much more involved and, at the same time, simpler.

My take:
I really like the Calibre software, but I found that keeping the library in sync between multiple computers is a bit of a hassle. And as browsers keep taking over my PDF viewer preferences (darned Edge!!!!), I thought: let's see if there is an online version of Calibre that I can self-host. Guess what, someone did indeed do this. As I have a spare desktop, I turned that into a Proxmox node. On that node I created a Linux VM (Ubuntu Server LTS, so no GUI of any kind). And then I followed the instructions from here: calibre-web
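For reference, the Docker route is roughly the sketch below (run on the VM). It uses the linuxserver.io calibre-web image; the host paths, port mapping and timezone are assumptions you would adjust to your own setup, and the calibre-web project documents other install methods too.

Code: Text
# Hedged sketch: calibre-web via the linuxserver.io container image.
docker run -d --name calibre-web \
  -e PUID=1000 -e PGID=1000 -e TZ=America/Asuncion \
  -p 8083:8083 \
  -v /srv/calibre-web/config:/config \
  -v /srv/calibre-web/books:/books \
  lscr.io/linuxserver/calibre-web:latest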

Actually, I have two spare desktops; the other one has Proxmox Backup Server installed on it. This makes automated backups really simple, so I don't have to worry too much about messing up the Calibre-web VM or losing the books that are now stored inside it.

Automatic scrolling in a web browser has been supported for years, so it is simply a matter of visiting the calibre-web instance in my LAN with a browser from any computer, enabling automatic scrolling, adjusting the rate to whatever is preferred at that moment, and reading at my leisure.

So you can see that my solution is way more involved, but also simpler... in a way.

On a somewhat related side-note:
I might even see if there is a (self-hostable) LLM out there that can "read aloud" a book by simply pointing it to the URL that calibre-web assigned to the book I want read to me, as sometimes I only have enough (mental) bandwidth to listen to a book while I'm doing something else. If listening to a book, or almost any other type of documentation, professionally or otherwise, is more your thing, you could try making the book(s)/document(s) you want to "hear" available to Google's NotebookLM.

It "ingests" the content and turns it into a podcast between 2 AI persons (male and female), which sound very convincing discussing the document(s) you provided.

8
Living Room / Re: want to get a new 3d printer in 2025
« on: January 01, 2025, 05:48 PM »
My boss is into 3D printers. He bought his first kit some 7 or maybe 8 years ago, from a brand no one has heard of, and he had to figure almost everything out himself, as documentation for it was very sparse. As were parts. He had to build sections of it that enabled him to create the parts he needed to complete the base of the printer.

Some 4 years ago, he got a resin printer from Elegoo: the (some name) 2.

I played around with that one too. These were (and are) nice 3D printers to get your hands wet on. Well, not literally: getting your hands wet with 3D printing resin is very hazardous to your health, and it has a very distinct, very "chemical" smell. Gloves and a face mask are recommended with these printers. The software and getting acquainted with 3D printing concepts I found to be very easy with resin printers. And the quality of those prints... really amazing in detail and smoothness.

The first 3D printer only had a bed of 25 x 25 cm (10 x 10 inch); I don't know the height. The resin printer is even smaller, 10 x 15 x 15 cm or so.

But last year he bought a new Voron printer (model 2.4, I believe) with a build volume of 35 x 35 x 35 cm. He is still busy building that one up to his own desired specs and needs.

So, if you are of the tinkering "tribe", Vorons are very capable and versatile. If you are not, resin printers are nowadays very fast and pretty capable too, and they also have much larger print beds and way higher printing resolution than the Elegoo one I have at my disposal here.

If resin is not to your liking, or you want to be able to make prints using different/stronger/tougher materials or with more than one color, Bambu and Prusa models are your best/safest bet.

Bambu printers, especially the multi-color ones, do produce quite a lot of 'poop', and their models are also kinda picky about the brands/rolls of print material you need to buy to use them properly. That could be something of a concern.

A YouTube channel about 3D printers (and printing) that I found quite interesting is called 'Uncle Jessy'.  :P

9
DC Gamer Club / Re: Valve Announces Steam Deck: A Handheld PC
« on: December 29, 2024, 07:34 PM »
Well, I have maybe 10 games on Steam and 192 on GoG, besides a small physical collection (5 or so), of course. I haven't owned (or rented) a console since the PlayStation 2. 'PC master race' is the meme for that, I believe.

Also, I hadn't even bothered to install the Steam client on any of my computers until now.

Most of those games were free; the rest were bought on offer, at 85% off or better. The physical collection was bought at full price.

So I spent the equivalent of about 8 full-price games but ended up with 200+ games. The 'money spent' counter in your stats I would indeed take with a (very big) grain of salt.  :D

10
DC Gamer Club / Re: Latest Game Giveaway
« on: November 29, 2024, 09:23 PM »
Couldn't help but notice that the main contributor to this thread isn't receiving all that many thank-yous for those contributions.

And: at GoG I now have almost 200 games in my collection... and I guess 45% of those games were obtained through notices like yours, Deozaan.

So you can believe me when I say: Many, many thanks!  :Thmbsup:

11
General Software Discussion / Re: Looking for QuickBooks Alternative
« on: November 05, 2024, 05:55 PM »
Not sure how useful this software would be with regard to QuickBooks files, but if Power BI doesn't fit the budget: FlowHeater

Via adapters you can transfer data from one format to another. It comes in both a free version and a paid version.

12
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: October 26, 2024, 06:17 PM »
You could watch a TV series called 'Beacon 23'. It is set in the distant future, where humankind uses manned beacons for interstellar travel, and you get a hint of what life is like in such a beacon. The beacons belong to corporations that run AI on the beacons themselves and tend to provide a personal AI to the person manning the beacon.

Might sound boring for a sci-fi series, but it really isn't. Quite a bit more serious than your movie idea, which I think would go over well  :D

13
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: October 21, 2024, 09:09 AM »
Don't fixate too much on ChatGPT, because it isn't the "all-in-one" solution for everyone or every use case.

So here are some (mainstream) alternatives:
- ChatGPT (pretty much on par with Gemini for my use cases).
- Phind (did well with the PowerShell and Oracle requests I made).
- v0 (a relatively new player that looks promising; it nags a bit about getting a subscription, but that is it).
- Gemini (pretty much on par with ChatGPT for my use cases).
- CoPilot (no experience with it, besides unconsciously activating it by misclicking in the Edge browser).

- Replicate (a site that lets you try out many different models, some free, some at cost).

14
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: October 19, 2024, 12:40 AM »
Hmmm... LM Studio won't work for me either. After poking around in the various tabs of the GUI it seems my CPU is the problem. 😭

Too bad about your CPU.

In all honesty, I did not expect that, as I have used both Tabby and LM Studio on a computer with an AMD APU (9700), the same GPU and 16 GB RAM. That APU was designed in AMD's 'Bulldozer' era, but made to fit the AM4 socket. That era may not be 2nd-gen i7, but it isn't that far off either. Tabby and LM Studio both worked on that CPU, but it struggled when more than one person was using the LLM functionality. Hence I moved it to the next best (or should I say worst) thing, which was the computer with the i3 10100F CPU. Since then, Tabby has been smooth sailing with 5 people accessing it.

Here in Paraguay it isn't that hard to find motherboards and CPUs from the 10th gen for cheap. It is also not really a problem to get 13th- and 14th-gen gear, but that is more expensive than what it costs in the U.S. or Europe. If I remember my parts prices correctly, that i3 computer would have cost about 425 to 450 USD, with the GPU as the most expensive part.

Perhaps it is an option to trade your computer (without GPU) for another older computer with a 4th/5th/6th-gen CPU for cheap? Refurbished gear, or something from a company that dumped their written-off gear at a computer thrift store? For getting your feet wet with LLMs/AI that could be useful, while also not breaking the bank.

15
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: October 18, 2024, 10:49 AM »
By the way, in this environment there is still a 1st-generation i7 CPU active as a database server. It has 32 GByte of RAM and SSDs. Yes, it is even more ancient than your second-generation i7 system, but it doesn't feel slow in anything it needs to do. So my boss doesn't want to change it, mainly because of (grandfathered) Oracle licensing costs and Oracle being slowly phased out here.

So, ancient computers or not, as long as you don't game on them, they are still quite useful. And energy costs are not that big of a concern in this part of South America, where the whole country runs on hydropower.

16
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: October 18, 2024, 10:32 AM »
Sorry, I thought I posted more relevant details. :-[

My machine is ancient. Core i7 2700K. GTX 670 (6GB). 👴

I installed the nVidia tools and it let me install Cuda 12.x but I don't know if my GPU supports that.

The command I'm trying to run is very slightly different from yours (StarCoder-1B instead of 3B), taken from the Windows Installation documentation:

Code: Text
.\tabby.exe serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device cuda

But after counting up for about 10 seconds it starts spitting out messages like the following every few seconds:

Code: Text
10.104 s   Starting... 2024-10-18T08:32:25.331474Z  WARN llama_cpp_server::supervisor: crates\llama-cpp-server\src\supervisor.rs:98: llama-server <embedding> exited with status code -1073741795, args: `Command { std: "D:\\Apps\\tabby_x86_64-windows-msvc-cuda122\\llama-server.exe" "-m" "C:\\Users\\Deozaan\\.tabby\\models\\TabbyML\\Nomic-Embed-Text\\ggml\\model.gguf" "--cont-batching" "--port" "30888" "-np" "1" "--log-disable" "--ctx-size" "4096" "-ngl" "9999" "--embedding" "--ubatch-size" "4096", kill_on_drop: true }

I left it running for about 2 hours and it just kept doing that.

More help would be appreciated.
First, it may be handy to list the specifications of the computer I use with Tabby (for reference):
CPU: Intel i3 10100F (bog-standard cooling, no tweaks of any kind)
GPU: MSI GeForce 1650 (4 GB of VRAM, 128-bit bus, 75 Watt version (without extra power connectors on the card))
RAM: Kingston 32 GByte (DDR4, 3200 MHz). As of this week, I added another 16 GB RAM stick, just to see if dual channel was an improvement or not. So far I haven't noticed a difference.
SSD: Crucial 500 GByte via SATA interface
All inside a cheap, no-name "gamer" case

There are 3 things to try:
  • Tabby without GPU support
  • Tabby with GPU support
  • No Tabby

Tabby without GPU support:
You will need to download the (much smaller) CPU-only Tabby archive, extract it, and make sure you have as much of the fastest RAM your motherboard supports, to make this the best experience possible.
Start it with:
.\tabby.exe serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
Less than ideal, but you would still be able to (patiently) see how it works.

Tabby with GPU support:
This overview from NVidia shows which CUDA compute capability their GPUs support.
My GPU is a 1650, which supports compute capability 7.5.
Your GPU is a 670, which supports compute capability 3.0.

This overview shows that you need NVidia driver 450.xx or newer for your card if you use CUDA development software 11.x.
You can get v11.7 of the CUDA development tools here. As far as I know, you can go to the NVidia website and download a tool that identifies your card and the maximum driver version it supports. If that number isn't 450 or higher, then I'm pretty sure that the Tabby version for CUDA devices won't work.
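If a driver is already installed, a quick way to check which version you actually have is the 'nvidia-smi' tool that ships with the driver:

Code: Text
nvidia-smi --query-gpu=name,driver_version --format=csv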

In case you can download a sufficient driver for your GPU, download the 'tabby_x86_64-windows-msvc-cuda117.zip' archive, extract it and start it with:
.\tabby.exe serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device cuda

In case you cannot download a sufficient driver for your GPU, you could still try a Tabby version that supports Vulkan devices. Vulkan is supported by NVidia/AMD/Intel GPUs and is often used to make Windows games work on Linux. As your GPU is from around 2012 and Vulkan is relatively recent, I don't know how far back Vulkan support goes. You might be lucky, though. Anyway, download the 'tabby_x86_64-windows-msvc-vulkan.zip' archive, extract it and start it with:
.\tabby.exe serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device vulkan

No Tabby:
If this also doesn't work, then you have to come to the conclusion that your GPU is simply too old for use with LLMs/AI and that you are relegated to software that provides CPU-only access to LLMs/AI. In that case, I recommend another free tool: LM Studio. This tool, in combination with the LLM 'bartowski/StableLM Instruct 3B', is my advice. The software is very versatile and takes about 1 GB of RAM, and the LLM takes about 4 GB of RAM, so you'll need 8 GByte or more in your computer for LM Studio to work halfway decently.


17
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: October 17, 2024, 09:49 AM »
I tried getting Tabby to work and it wouldn't start up. It said "Starting..." and then spit out warnings every 5 seconds or so, forever.

How are you running it?

In my environment, Tabby runs on a standard Windows 10 computer with a GeForce 1650 card.

I downloaded the 'tabby_x86_64-windows-msvc-cuda117.zip' from GitHub, extracted it, and created a Windows batch file to start it up with the models that fit into the VRAM of my GPU (which is only 4 GByte); a minimal sketch of such a wrapper follows below. The documentation provides examples of how to start Tabby. I use the 'cuda117' archive because the extra NVidia CUDA development software I needed to install only has support up to that version of CUDA.

The format I use to start Tabby:
.\tabby.exe serve --model StarCoder-3B --chat-model Qwen2-1.5B-Instruct --device cuda
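My batch file is not much more than that line. As a minimal sketch, a PowerShell equivalent of such a wrapper (the install path is an assumption):

Code: PowerShell
# Minimal start wrapper for Tabby; the install path is an assumption.
Set-Location "D:\Apps\tabby"
.\tabby.exe serve --model StarCoder-3B --chat-model Qwen2-1.5B-Instruct --device cuda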

This is the smallest combination of models you can use with Tabby. It supports more models for Chat and Code, but not as many as other local AI tools do. So, if you have a 30xx card or better, preferably with more VRAM, use better models.

The first time you start Tabby, it will need to download the model for Chat and the model for Code. So the first start will take quite some time if your connection isn't all too great. You'll find these downloaded models in C:\Users\<your.user.account>\.tabby.

The start procedure, when finished, shows a Tabby logo in ASCII graphics and tells you that it is accessible at: http://0.0.0.0:8080
Once that text shows up, you can use any computer in your LAN to browse to that address and start configuring it, by which I mean create the first admin account. The free/community version can be used by a maximum of 5 users at the same time.

You can either continue with configuring other accounts, a mail server (for invites), Git repos, etc., or go back to the web interface and see for yourself how responsive it is. There are more instructions in the web interface, in case you want to use Tabby directly in VSCode, JetBrains or Vim.

I followed those for VSCode and its Tabby extension. Works like a charm.

** edit:
If you need more help, I have just finished a more comprehensive manual for work.

18
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: October 14, 2024, 08:35 AM »
An addition to my earlier 'tabby' post:
You can link it to any Git repo, whether it is locally hosted or in the cloud, and it will use that repo to produce code in a similar style. It can also do this with a (large) document.

There is a 'Tabby' extension for Visual Studio Code, so you can use Tabby directly in VSCode. And when hooked into your repository, VSCode can automagically autocomplete large(r) sections of code in the style you want.

'Tabby' works OK with an NVidia card that has only 4 GByte of VRAM, but it will load only the smallest model for chat and the smallest model for producing code. That will give reasonable Python and bash support, but not much else.

If you have an NVidia card with 8 GByte of VRAM or more, you can start playing with the 13B models in Tabby's repertoire, which support more coding languages and/or a better chat model.

Just add # in front of your request and it will look in the repository; add @ in front of your request and it will consult the document you have coupled with 'Tabby'.

19
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: September 10, 2024, 10:14 PM »
Found another AI "toy" to play with today.

It is called tabby and you'll find it on GitHub.

Tabby is indeed an assistant, and one that you can self-host. No matter where your Linux, MacOS or Windows computer is running (on-prem, hybrid or cloud), it will work. Instructions to download and run the software are very easy. It will download two (smaller) LLMs from Qwen and StarCoder if it doesn't find these on your system.

Currently I'm testing it on a computer based on a pre-Ryzen AMD APU that AMD adjusted to fit motherboards supporting Ryzen 1st gen through 3rd gen. That computer also has an old NVidia GeForce 1650 card, which has (only) 4 GByte of VRAM. And yet, both LLMs fit in there. The website has a listing of which LLMs are supported and their requirements, including the required NVidia development software. It might all sound complicated; it really isn't.

Once you have it running, Tabby becomes a server on your computer. Access it by entering http://localhost:8080 in a browser on the computer that hosts Tabby, or use any other computer with a browser in your network to visit: http://<ip address tabby>:8080
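To quickly check from another machine in the LAN that the server is up, something like this will do (the IP address is a placeholder for whatever your Tabby host uses):

Code: PowerShell
# Quick reachability check; the IP address is a placeholder.
(Invoke-WebRequest -Uri "http://192.168.1.50:8080" -UseBasicParsing).StatusCode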

You will be asked to create an admin account the first time you access it. Tabby comes with a free community license (up to 5 users). They also have subscription plans, if that is more your thing.

My test machine could be considered crippled. However, Tabby performs really well; even on my testing clunker it is almost as fast as ChatGPT. This amazed me to no end. Sure, the models are small, and I have hardly had any time to really check how useful the answers it provides truly are.

But the responsiveness and ease of working with the community version, that was a (very) pleasant surprise. So I thought I'd mention it here at DC.

Oh, and while it comes with its own web interface, in there are links that point to ways to incorporate Tabby into editors like VSCode, if that is more to your liking.

20
hoping that the modern "gaming" mice will be more resistant.
After years of using Logitech M705 mice, where newer editions started double-clicking when not intended because of poor switches, I've since then bought the Razer Basilisk X, and that has proven to be of better quality, and still big enough for my not so small hands, but not as bulky and pricey as the Logitech MX Master. Another reason not to choose Logitech again was several reports of their sometimes poor quality from other ppl (colleagues, forum, tech-news).

Was replacement of the switches not an option? If mice with your preferred shape/size are expensive enough, replacing the switches (with better models) could be a consideration. I have done this for my own Logitech mouse. I like its size, and the feature to add/remove weight via a removable 'cassette'.

21
Where has the website gone?

It was behind a .tk domain. If I remember correctly, that country took back control of the .tk extension from the registrar that used to manage it. They not only cranked up prices, but introduced a residency requirement as well.

Domain provider 'Freenom' used to hand out domains with the .tk extension for free. They don't do that anymore, as there was much abuse/spam/malware being spread from domains managed by the Freenom registrar. The internet can't have nice things, apparently.

And, to be more on point regarding text editors:
There is Zed. AI helper included, and a lot of other features that may prove very useful for projects that involve more than one developer. Available for free on Linux and MacOS. They seem to be working on a Windows version as well; they do provide the source and instructions to build it yourself on Windows. The creators of the text editor 'Atom' are behind this project.

22
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: August 03, 2024, 03:25 AM »
Which assistants have you tried, because I personally know quite a few (and even work for one) that are actually incredible (especially for autocomplete, but also for quickly getting code snippets, answers, bug fixes, code smells, etc)
-KynloStephen66515 (August 02, 2024, 04:50 PM)

Not assistants per se, but I have been using a tool, 'LM Studio', to run 8 LLMs locally. This tool provides an easy way to download LLMs, use one or more of them in the provided chat screen, and run one or more models (at the same time) as a server, which you can access via an API that uses the same format as the OpenAI API.
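Because the API follows the OpenAI format, you can, for example, list the models the server currently exposes (port 1234 is LM Studio's default; adjust if you changed it):

Code: PowerShell
# Lists the models exposed by the local server; port 1234 is LM Studio's default.
Invoke-RestMethod -Uri "http://localhost:1234/v1/models"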

Right now I'm most impressed with the model 'bartowski/StableLM Instruct 3B'. It doesn't take up that much RAM and responds surprisingly well in CPU-only mode, even on an i3 10100F CPU. You can also set it to use the available GPU (NVidia/AMD), if that has enough memory to offload one or more models into. And it allows you to play with quite a few model-specific settings for the LLMs you load into memory. LM Studio is freeware.

Sometimes I verify the results by filling in the exact same prompt in ChatGPT (v3.5, I think that is the free one) and in the locally running StableLM model. ChatGPT's answers show up faster and usually use a lot more words to convey the same message.

Basic script generation works quite well in both, but ChatGPT can deal with a bit more complexity. Still, for my purposes, the StableLM model hasn't been too far off ChatGPT, or too slow in comparison.

The thing I am looking for is a relatively easy way to train the StableLM model I have with company-specific documentation, our scripting language and documentation portals. For that purpose, the open-source tool 'LlamaIndex' appears to be very interesting.

Once I can train the LLM I have, turning my local AI instance into a proper personal AI assistant shouldn't be too much of a problem.

23
Living Room / Re: Mouser in the movies
« on: July 26, 2024, 09:35 PM »
Watching "Free Guy" and noticed one of the characters is named Mouser.
I couldn't resist breaking the rust off of my login to comment. 😁😁

I hope everyone is still doing awesome, I love seeing a non Windows section, especially after the crowdstrike fiasco. Yes, I know it wasn't a Windows specific failure. It does speak to the fact that Windows is unfortunately the primary target for development.

From other news sources I understood that about a month before the current CrowdStrike fiasco, a similar update was pushed by CrowdStrike to Linux servers, with practically the same result (a kernel panic instead of a BSOD).

Except it was mostly caught on test systems at companies, not on their production servers. And this event was also duly reported back to CrowdStrike, who apparently didn't learn (enough) from that mistake and did their roll-out to Windows systems anyway.


24
Official Announcements / Re: New server/OS update
« on: July 04, 2024, 01:21 AM »
No worries from this end.  :)

As it happens, there was a pretty severe CVE (CVE-2024-6387) a day or two ago. As it concerns SSH, I'm now also busy updating and migrating the Linux servers in my care. And on more than one occasion, I have found that it would have taken less time to recreate a server than it does to migrate the same server to a more up-to-date version of the OS.
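For reference: as far as I know, this CVE ('regreSSHion') affects OpenSSH 8.5p1 up to 9.7p1, with the fix landing in 9.8. So a quick version check on each box tells you whether it needs the urgent treatment:

Code: Text
ssh -V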

Most of the servers I manage are set up for a single purpose, configuration files are backed up separately, setups are documented well, and where possible there are (version-controlled) automated installation scripts. That likely affects my opinion on the matter.

Some of those single-purpose servers took almost 3 hours for an OS migration. So if your server is set up for multiple purposes, a migration or rebuild can take many more hours. Well, more than one would expect, anyway.

25
Living Room / Re: Gadget WEEKENDS
« on: April 20, 2024, 06:21 PM »
I finally got the ZimaBoard yesterday. I like its aesthetics. I was using a 12+ year old PC as an OPNsense router, but it failed after a brown-out. I managed to get it back up and running and ordered this ZimaBoard as a replacement. I'll use the 4-port NIC from the old system in the ZimaBoard instead; I bond 2 internet connections from different ISPs. Forum posts told me that people like the Zima gear to act as their router after adding non-Realtek NICs to the unit.

I got the 8 GByte one and have played with the CasaOS that comes with it. It all works decently enough. If you are a bit patient and don't visit intensive websites, it is practically good enough as a replacement for a normal computer. I needed to get a cable that converts the mini DisplayPort to a more useful type of connection in these parts of the world. Once I did that, I connected an SSD to the device, and that makes quite a positive difference. The SSD still had a Linux Mint installation on it, and after a somewhat lengthy first boot, it booted and worked fine.

As far as I know, Raspberry Pis are much more constrained regarding available computing resources, so how useful those would be for my particular use case, I do not know. A friend of mine abandoned his RPi 2; he's totally into ESP32 devices now. He's making all kinds of measuring devices with those in an attempt to automate his home. He got an ancient massage chair from NL, replaced the motors, and redid all the electrical logic with an ESP32 instead of repairing what was there. He programmed a web interface in Home Assistant for that ESP32 device, and now he can control that massage chair via his computer/laptop/phone. Works wonderfully well.

An ESP32 can't do much computationally, but they are very versatile. And at a cost price of 2 to 3 USD per unit, much more useful to him than his RPi 2. Especially in combination with Home Assistant and its 'Node-RED' extension/plug-in.
