Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - KynloStephen66515

Pages: [1] 2 3 4 5 6 ... 145
1
For anybody who tried to visit the domain in the last 24 hours: I'm an idiot and made a change to .htaccess that broke it - fixed it now!

2
🎨 Chromivra: The Definitive Color Intelligence Platform
Professional-Grade Analysis. Perceptual Accuracy. 100% Client-Side Privacy.
---------------------------------------------------------
🚀 What is Chromivra?
Chromivra is a next-generation color intelligence platform designed for professionals who demand mathematical precision without sacrificing privacy. Unlike traditional color extractors that upload your assets to a cloud, Chromivra performs 100% of its calculations locally in your browser.
🛡️ Privacy & Stability First
  • Zero Uploads: Your images never leave your local machine. All processing happens via HTML5 Canvas and WebWorkers.
  • Mission-Critical Stability: Engineered with defensive programming, including NaN-guards, infinity protection, and zero-division checks in every mathematical module.
  • No Tracking: No external APIs, no cookies, no telemetry. Just pure tools.
🔬 Technical Deep Dive
The Science of Accuracy:
We don't just "guess" colors. Chromivra implements industry-standard perceptual color science:
  • Delta E Accuracy: Support for CIE76, CIE94, and CIE2000 algorithms for sub-pixel perceptual difference tracking.
  • Precision Conversions: Ultra-precise transformations between XYZ, LAB, LCH, HSV, HSL, and CMYK color spaces using D65 illuminant standards.
  • Gamut Analysis: Visual coverage reports for sRGB vs. Display P3 color spaces.
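As a concrete illustration, the simplest of these metrics, CIE76, is just the Euclidean distance between two CIELAB coordinates. A minimal Python sketch of the formula (illustrative only, not Chromivra's actual code):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 Delta E: Euclidean distance between two CIELAB colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two mid-lightness colors differing slightly in a* and b*:
print(round(delta_e_cie76((50.0, 2.0, 2.0), (50.0, 0.0, 0.0)), 2))  # 2.83
```

A Delta E of roughly 2.3 is commonly cited as a just-noticeable difference; CIE94 and CIE2000 refine this distance with perceptual weighting.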
🔓 Transparent & Open Math
No "Black Box" algorithms or mysterious "AI magic."
We believe that professional tools should be auditable. The mathematical foundation of every module in Chromivra is completely open and documented. We provide detailed explanations and formulas for every calculation, from WCAG contrast ratios to Delta E derivations, ensuring you can trust the results for production-grade projects.
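The WCAG contrast ratio mentioned above is a good example of auditable math: linearize each sRGB channel, compute relative luminance, then take (lighter + 0.05) / (darker + 0.05). A minimal sketch of the spec's formula (not Chromivra's source):

```python
def srgb_to_linear(channel):
    """Linearize one 8-bit sRGB channel per the WCAG 2.x definition."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an (r, g, b) tuple, 0.0 (black) to 1.0 (white)."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    lighter, darker = sorted((relative_luminance(rgb1),
                              relative_luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))  # 21.0 (the maximum)
```

WCAG 2.1 AA requires at least 4.5:1 for normal text and 3:1 for large text.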
Professional Extraction Algorithms:
Choose the right logic for your asset:
K-Means Clustering | Median Cut | Octree Quantization | Histogram Analysis | DBSCAN Density Analysis
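To illustrate just one of these, a toy k-means extraction over RGB tuples might look like the sketch below (heavily simplified; production implementations sample pixels and use smarter seeding such as k-means++):

```python
import random

def kmeans_palette(pixels, k=3, iters=10, seed=0):
    """Toy k-means: cluster RGB tuples and return k centroid colors."""
    random.seed(seed)
    # Seed from distinct colors to avoid duplicate initial centroids.
    centroids = random.sample(sorted(set(pixels)), k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # Assign each pixel to its nearest centroid (squared distance).
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # Move each centroid to the mean of its cluster.
                centroids[i] = tuple(sum(ch) / len(members)
                                     for ch in zip(*members))
    return centroids
```

Median Cut and Octree instead partition color space directly, which is faster on large images but can be less perceptually tight than clustering.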
💎 Enterprise-Grade Modules
1. Brand Strategy & Intelligence
  • Brand Personality Scoring: Algorithmic assessment of Modern vs. Traditional and Professional vs. Playful vibes.
  • Industry Fit Analysis: Scores your palette against 6 key sectors: Tech, Healthcare, Finance, Creative, Food, and Fashion.
  • Golden Ratio Harmonies: Fibonacci-based color sequence analysis.
2. Psychological & Cultural Profiling
  • Emotional Impact Assessment: Statistical mapping of Energy, Calmness, Warmth, and Trustworthiness.
  • Global Cultural Context: Meaning and symbolic analysis across 50+ countries/regions.
  • Seasonal Associations: Metadata analysis for Spring/Summer/Autumn/Winter vibes.
3. Accessibility & Vision Science
  • WCAG 2.1 Compliance: Full contrast matrix for all color pairs in your palette.
  • Vision Condition Simulator: Real-time simulation of 8+ conditions, including Protanopia, Deuteranopia, and Tritanopia.
  • A11Y Recommendations: Automatic suggestions to bring failing contrast pairs into compliance.
🛠️ Workflow & Integration
Export Suite (25+ Formats):
  • Design: Adobe ASE/ACO, GIMP GPL, Sketch Palettes, Figma Variables.
  • Development: CSS/SCSS/LESS Variables, JSON Data, Swift/Kotlin Objects.
  • Executive Reports: Professional PDF Brand Kits and Executive Summary generation.
Power User Features:
  • Keyboard Shortcuts: C (Copy), E (Export), H (History), T (Toolbar), Ctrl+S (Save).
  • Floating Action Toolbar: A modern, radial FAB menu for swift mobile navigation.
  • Palette Sharing: Instant Base64-encoded URL sharing for collaborative reviews.
  • History System: LocalStorage-based history for up to 50 palettes with favorite marking.
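The palette-sharing idea is simple to reason about: serialize the palette, Base64-encode it, and put it in the URL. A hypothetical round-trip sketch (not Chromivra's actual encoding scheme; the URL is made up):

```python
import base64
import json

def encode_palette(colors):
    """Pack a list of hex color strings into a URL-safe Base64 token."""
    return base64.urlsafe_b64encode(json.dumps(colors).encode()).decode()

def decode_palette(token):
    """Recover the palette list from a shared URL token."""
    return json.loads(base64.urlsafe_b64decode(token))

token = encode_palette(["#1a2b3c", "#ff6600"])
print(f"https://example.com/share#{token}")  # hypothetical share URL
```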
Supported Formats: JPG, PNG, WebP, GIF, BMP, SVG, TIFF, AVIF, HEIC (Up to 25MB).
Explore the Future of Color Analysis:
https://chromivra.com
Designed for Excellence. Built for Privacy. Released for You.

Oh, and by the way....It's completely free  ;)


3
OK, so it now defaults to Groq if you don't have anything specifically set, but you can override it via GitHub Secrets if you want to set an Ollama BASE_URL, for example:

# 3. Configure GitHub Settings > Secrets and variables > Actions
#   - New Repository Secret: GROQ_API_KEY (or LLM_API_KEY)
#   - New Repository Variable: LLM_BASE_URL (Optional, for custom providers)
#   - New Repository Variable: LLM_MODEL (Optional, e.g. "llama3")

The API key is optional if you don't have one for your locally hosted models.
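To illustrate how a script might consume those settings, here's a hedged sketch (the default base URL and model name are assumptions, not necessarily what the workflow uses). Both Groq and a local Ollama server expose an OpenAI-compatible /chat/completions endpoint:

```python
import os

def build_chat_request(prompt,
                       default_base="https://api.groq.com/openai/v1",
                       default_model="llama3"):
    """Assemble an OpenAI-compatible chat request from the env vars above."""
    base = os.environ.get("LLM_BASE_URL", default_base).rstrip("/")
    model = os.environ.get("LLM_MODEL", default_model)
    api_key = os.environ.get("LLM_API_KEY") or os.environ.get("GROQ_API_KEY")
    headers = {"Content-Type": "application/json"}
    if api_key:  # Optional for keyless local servers such as Ollama.
        headers["Authorization"] = f"Bearer {api_key}"
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return f"{base}/chat/completions", headers, body
```

Pointing LLM_BASE_URL at http://localhost:11434/v1 (Ollama's default) with no key set would then target the local server instead of Groq.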

4
Looks interesting, so thanks  :Thmbsup:

As I have been playing around with the latest (and largest that fits) Mistral LLM in my local setup, it produces quite useful results with code. That gives me the impression that it would also do great with generating documentation from code.

Instead of Grok, can I use local LLMs instead?



Groq* (Grok is X.com's; Groq is a different provider)

That aside, it wouldn't be hard for me to add this. However, I'm not sure how it would work, given you'd need to make your local LLM publicly available in order for GitHub to reach it and do stuff.

What sort of workflow are you thinking?

5
Repo: https://github.com/Kynlos/Akari-Flow


Intelligent CI/CD Documentation & Analysis System

Why Akari Flow? 
"Documentation is the code." — Everyone says it, but nobody actually searches the code. In reality, documentation rots the moment it's written. Developers hate writing it, and teams struggle to maintain it. The cost? Slower onboarding, silent breaking changes, and critical knowledge locked in people's heads. 
This system aims to eliminate documentation debt entirely.

Overview 
Akari Flow revolutionizes code documentation by automatically analyzing code changes with AST-based precision, creating diagrams, generating docs, detecting breaking changes, and notifying teams.

✨ Key Features 
  • Automated Code Analysis – Breaking change detection, impact analysis, quality scores, security scanning. 
  • Intelligent Documentation & Visualization – Auto docs, architecture diagrams, static site generation. 
  • PR Enhancement & Agentic Bot – Smart PR comments and safe search/replace automation. 
  • Multi-Platform Notifications – Discord, Slack, Pushbullet integration. 
Quick Start 
Prerequisites: 
  • GitHub repository 
  • Groq API key (free tier available) 
  • Python 3.11+ 

5-Minute Setup: 
git clone https://github.com/Kynlos/Akari-Flow.git
cd Akari-Flow

# Copy workflows into your repo
cp -r .github <your-repo>/

# Add required secret
# GROQ_API_KEY = <your-groq-api-key>

Push a change and the magic starts.

CLI Tool (Local Usage) 
Install dependencies and run:
pip install requests markdown

python cicd-cli.py full-run

Generate docs or site with dedicated commands.

Configuration 
Language parsing rules and LLM model settings are fully configurable.

Usage Examples 
Agentic Bot – Comment on PRs to modify code safely. 
Breaking Change Detection – Signature changes trigger alerts.
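As a rough illustration of what AST-based signature diffing involves (a minimal sketch using Python's stdlib ast module, not Akari Flow's actual analyzer):

```python
import ast

def signatures(source):
    """Map each top-level function name to its argument-name tuple."""
    tree = ast.parse(source)
    return {node.name: tuple(a.arg for a in node.args.args)
            for node in tree.body if isinstance(node, ast.FunctionDef)}

def breaking_changes(old_src, new_src):
    """Report functions removed or re-signed between two revisions."""
    old, new = signatures(old_src), signatures(new_src)
    report = []
    for name, args in old.items():
        if name not in new:
            report.append(f"REMOVED: {name}{args}")
        elif new[name] != args:
            report.append(f"SIGNATURE CHANGED: {name}{args} -> {name}{new[name]}")
    return report
```

Parsing the AST rather than diffing text means renamed parameters and deleted functions are caught even when the surrounding lines look similar.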

Documentation Portal 
The system builds a static site with navigation, dark mode, and diagrams.

Architecture 
The pipeline runs as GitHub Actions — parse code, generate docs & diagrams, build site, send notifications, then commit back.

Support 
If you find this project useful, consider sponsoring development.

About 
Intelligent CI/CD: Automated documentation, AST-based analysis, breaking change detection, and dynamic visualization powered by AI.

6
General Software Discussion / Re: Chat app recommendations
« on: October 14, 2025, 02:42 PM »
IRC?  ;D

7
There are other DDOS protection services out there. Maybe Mouser could look into them.

Cloudflare is one of the biggest and best on the market.  Checking you are human is quite a normal process for it and the service is trusted by practically every major enterprise that exists.

It will happen until the attack has stopped.

It's a momentary inconvenience for you, but one that allows the site to stay online.

8
It was another traffic attack on DC. This has been mitigated with Cloudflare protection and should make things a bit more stable now :)

9
N.A.N.Y. 2025 / Re: NANY 2025
« on: December 19, 2024, 02:28 PM »
Ok, then I am appealing to mouser on this.
Mouser can you please take note and force-change his irreverent picture of the Man Jesus Christ.
If he disrespected Him, it is worse than disrespecting my own mother or father.
He has stated that he will not change it.
He has disrespected God Himself, Jesus the Son, Christianity, Islam and any other person who holds Christ in high regard.
Thank you,
Jim Kinsman


This is not a relevant discussion for here and you absolutely can't dictate that another user change their profile picture based on a highly subjective opinion of it.  This is not a website to dictate your religious beliefs or impose them on other people.

Tuxman has had this profile picture for as long as I can remember, and your "outrage" suddenly came after not getting the answer you wanted (which, by the way, nobody said no; they just said it might be better to mention it in NANY and post about it elsewhere on the forum to highlight the freemium nature of it. It might be helpful for you to take a second look at mouser's answer, linked here: https://www.donation....msg455026#msg455026 - relevant quote added below.)

By all means, share it on DonationCoder, even mention it in the NANY thread, and don't feel bad about creating commercial software.  Even make a post linking to it in the NANY section.


Since this thread has become what it has, I will lock it.  Feel free to make another (not related to this discussion) thread in the NANY 2025 section, or elsewhere.

10
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: December 06, 2024, 02:58 PM »
The reason we limit to 200 per month is because:

A: We are the lowest price for unlimited
B: We allow use of Claude 3.5 Latest (the best model on the market) and GPT4o  :-*
-KynloStephen66515 (December 05, 2024, 02:40 PM)

A. Perhaps, but as a free user, it doesn't affect me.
B. And so does Codeium. As I said, you can use GPT-4o/Claude 3.5 Sonnet, you're limited to 20 chats per day for free.

If you're a paid user or part of a large team, Cody AI might make sense (considering it's cheaper), but as a free user, I haven't found anything better than Codeium.

Honestly, if you love Codeium, it's worth the upgrade to paid just for access to Windsurf - it has its downsides, but nothing that I imagine won't be fixed fairly quickly with updates!

11
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: December 05, 2024, 02:40 PM »
I've been trying out Windsurf recently and it has huge potential to be a great product, but it's not quite there yet.  Between the constant issues with Codeium hitting their usage caps with Anthropic, and tool_use errors, it still has some way to go.  That being said, I have enjoyed it so far (even if, when the "oopsie" errors hit halfway through editing a file, it refuses to elaborate, so you don't know what it broke).

I do mainly use Cody AI (by Sourcegraph), which is the one I work for, mainly, but that doesn't mean I can't enjoy seeing what other tools are doing  :D ;D
-KynloStephen66515 (December 04, 2024, 07:04 AM)

I haven't had the chance to test Windsurf yet. Normally, I use Codeium through the web browser, or in VSCode. I mostly use it for small tasks (regex, snippets, SQL optimizations..etc).
I never tried Cody AI, but considering that the free tier only offers 200 chats per month, it feels limited compared to Codeium.
Codeium free tier offers unlimited chats if you use their Base model (Llama 3.1 70B), and if you switch to GPT-4o/Claude 3.5 Sonnet, you're limited to 20 chats per day.
For my use case, I rarely use anything other than the Base model, though.


The reason we limit to 200 per month is because:

A: We are the lowest price for unlimited
B: We allow use of Claude 3.5 Latest (the best model on the market) and GPT4o  :-*

12
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: December 04, 2024, 07:04 AM »
I've been using Codeium for quite some time now.
It can be used in the browser directly, as an extension for VSCode/VStudio, or through their own editor Windsurf.
Best of all, it's completely free. They do have paid subscriptions with more features, but I never felt that I needed to upgrade.

PS: I'm not affiliated to Codeium in any way.


I've been trying out Windsurf recently and it has huge potential to be a great product, but it's not quite there yet.  Between the constant issues with Codeium hitting their usage caps with Anthropic, and tool_use errors, it still has some way to go.  That being said, I have enjoyed it so far (even if, when the "oopsie" errors hit halfway through editing a file, it refuses to elaborate, so you don't know what it broke).

I do mainly use Cody AI (by Sourcegraph), which is the one I work for, mainly, but that doesn't mean I can't enjoy seeing what other tools are doing  :D ;D

13
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: October 27, 2024, 06:30 PM »
Raspberry Pi Pico W's onboard LED

Not to stray too far off-topic, but you seem to be correct, as per: https://forums.raspb...ewtopic.php?t=358964 and https://forums.raspb...ewtopic.php?t=336836

14
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: October 27, 2024, 08:20 AM »
Thanks Kynlo. While I was adding these to my bookmarks, I decided to try out LobeChat's hosted version. It went something like this (highly paraphrased):


Q: Does the onboard LED of the Raspberry Pi Pico W support PWM to adjust its brightness?

A: Yes, it does.

Q: Can you show me some example code that will fade the LED?

A: This code will do that for you.

[ 15 minutes trying to get the code to work ]

Q: The code isn't working. What might be wrong?

A: Try these troubleshooting steps.

[ 15 minutes trying all the troubleshooting steps ]

Q: Are you absolutely sure the onboard LED supports PWM?

A: My apologies. The onboard LED does not support PWM.

Q: Can you show me example code to simulate PWM to adjust the LED's brightness in software rather than hardware?

A: This code will do that for you.

[ It works! ]

Pretty helpful when it doesn't lead you on a wild goose chase with wrong information.


I'm surprised there hasn't been a movie made yet about a group of people with a robot/AI companion who all end up stuck in a cavern. They ask the robot/AI to help them find a way out, and it unintentionally states incorrect information as fact. Based on that information (and future queries that don't challenge the initial assumption), the humans travel a treacherous path where many are injured or die, until finally someone asks "Are you sure this is the best course of action?" and the robot/AI says "My apologies. We can just take the ladder near where we started." ;D

What model did you use for this?

I'd highly suggest only using Claude-3.5-Sonnet-Latest, for coding-related work, as it's the only one that's actually any use.  GPT4o isn't terrible, but given that it has a knowledge cutoff in 2023, it has absolutely no idea about anything more recent.


This is the output I got using Claude (albeit, with a tuned system prompt for coding):

Yes, the Raspberry Pi Pico W's onboard LED (GPIO 25/"LED" pin) does support PWM! I'll show you both hardware PWM and software PWM approaches.

1. Hardware PWM Solution: Here's a complete example that fades the LED up and down using hardware PWM:

from machine import PWM, Pin
import time

# Setup the onboard LED with PWM
led_pwm = PWM(Pin("LED"))  # GPIO 25
led_pwm.freq(1000)  # Set PWM frequency to 1000Hz

def pulse_led():
    while True:
        # Fade in
        for duty in range(0, 65535, 1000):
            led_pwm.duty_u16(duty)
            time.sleep(0.01)
           
        # Fade out   
        for duty in range(65535, 0, -1000):
            led_pwm.duty_u16(duty)
            time.sleep(0.01)

try:
    pulse_led()
except KeyboardInterrupt:
    # Clean up PWM on exit (Ctrl+C)
    led_pwm.deinit()

2. Software PWM Solution (if hardware PWM isn't working): This simulates PWM using software timing:

from machine import Pin
import time

led = Pin("LED", Pin.OUT)

def set_brightness(brightness):
    # brightness should be between 0 and 100
    cycle_time = 0.001  # 1ms cycle
    on_time = (brightness / 100.0) * cycle_time
    off_time = cycle_time - on_time
   
    led.value(1)
    time.sleep(on_time)
    led.value(0)
    time.sleep(off_time)

def pulse_led():
    while True:
        # Fade in
        for brightness in range(0, 101, 5):
            for _ in range(10):  # Multiple cycles per brightness level
                set_brightness(brightness)
       
        # Fade out
        for brightness in range(100, -1, -5):
            for _ in range(10):  # Multiple cycles per brightness level
                set_brightness(brightness)

try:
    pulse_led()
except KeyboardInterrupt:
    led.value(0)  # Turn off LED on exit (Ctrl+C)

Key differences between the approaches:

Hardware PWM:
  • More efficient
  • Smoother fading
  • No CPU overhead
  • Uses dedicated hardware

Software PWM:
  • More CPU intensive
  • May show slight flickering
  • Works on any GPIO pin
  • Useful when hardware PWM isn't available


To run either example:

mpremote cp main.py :main.py

mpremote reset

The LED should start fading in and out in a continuous loop. The hardware PWM version will give you the smoothest results, but both approaches work to demonstrate LED brightness control.

15
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: October 24, 2024, 04:35 PM »
For API Usage:
Mistral has an extremely generous free tier (1 billion tokens per month, 500,000 tokens per minute)
Gemini is free with a huge context window (2 million tokens on Pro, 1 million on Flash models)
Groq (not Grok) is also free (heavy usage limits that might not suit everybody, but EXTREMELY fast output speed)

Also try OpenWebUI, as it's capable of "plugging in" any model from the above list.  Another really good web UI with a lot of extra features that can also run locally is LobeChat (they also have a hosted version).


I don't wanna shamelessly promote the company I work for (We have a very generous free tier for our VSCode/JetBrains/WebUI, AI coding assistant) so if you wanna know about that, drop me a DM here or discord and I'll let ya know (and can hook you up with a month of free Pro tier, too).

16
Official Announcements / Re: New server/OS update
« on: August 06, 2024, 05:44 PM »
OK so, based on those examples, that's 4 outages since 2019, totalling 246 minutes (a little over 4 hours).

I'm by no means defending those outages, but it's a little hard not to find the sentence "Clownflare tends to be the reason for sites to be completely unavailable" a little dramatic, when it accounted for only slightly over 4 hours in 5 years.  :-*

Anyway, this has absolutely nothing to do with the thread.  The issue was a traffic attack, which Cloudflare mitigated within minutes of turning on Attack Mode, so regardless of your feelings on it, the service provided the desired outcome for what DC needed. /shrug

17
Official Announcements / Re: New server/OS update
« on: August 05, 2024, 02:41 PM »
Clownflare tends to be the reason for sites to be completely unavailable:
https://blog.cloudflare.com/tag/outage

But you do you, I guess.

Only a single one of those posts has anything to do with Cloudflare being the issue.  The rest are just informative posts about widespread internet outages across the world.

18
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: August 03, 2024, 08:44 PM »
Well, if you want any suggestions for good tools, let me know as I have a list! (Unbiased and based off of personal experience with the tools in question) XD
-KynloStephen66515 (August 03, 2024, 06:09 AM)

I'm interested in seeing a list, because my experience has left me with the impression that AI models aren't yet fully baked, and rarely give me truly useful results.

Are you interested in IDE-based plugins (think GitHub Copilot), or standalone/web versions (ChatGPT, etc.)?

20
Living Room / Re: Is mouser on vacation ?
« on: August 03, 2024, 12:26 PM »
He's alive on discord!  :D

21
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: August 03, 2024, 06:09 AM »
Which assistants have you tried
-KynloStephen66515 (August 02, 2024, 04:50 PM)
None, opinion based on the crappy 'examples' / 'answers' that ppl posted in (another) forum.

Basically, I'm sitting out the storm, to see what eventually rolls out without the world crumbling to pieces. (Must be related to my age/experience and somewhat conservative approach to 'the latest hype') :)

Well, if you want any suggestions for good tools, let me know as I have a list! (Unbiased and based off of personal experience with the tools in question) XD

22
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: August 03, 2024, 06:07 AM »
Which assistants have you tried, because I personally know quite a few (and even work for one) that are actually incredible (especially for autocomplete, but also for quickly getting code snippets, answers, bug fixes, code smells, etc)
-KynloStephen66515 (August 02, 2024, 04:50 PM)

Not assistants per se, but I have been using a tool, 'LM Studio', to run 8 LLMs locally. It provides an easy way to download LLMs, use one or more of them in the provided chat screen, and run one or more models (at the same time) as a server, which you can access via an API in the same form as the OpenAI API.

Right now I'm most impressed with the model 'bartowski\StableAI Instruct 3B'. It doesn't take up that much RAM and responds surprisingly well in CPU-only mode, even on an i3 10100F CPU. You can also set it to use the available GPU (Nvidia/AMD) if that has enough memory to offload one or more models into, and it lets you play with quite a few model-specific settings for the LLMs you load into memory. LM Studio is freeware.

Sometimes I verify the results by filling in the exact same prompt into ChatGPT (v3.5, I think that's the free one) and the locally running StableAI model. ChatGPT answers show up faster and usually use a lot more words to convey the same message.

Basic script generation works quite well in both, but ChatGPT can deal with a bit more complexity. Still, for my purposes, the StableAI model hasn't been too far off ChatGPT or too slow in comparison.

The thing I am looking for is a relatively easy way to train the StableAI model I have with company-specific documentation, our script language, and documentation portals. For that purpose, the open-source tool 'LlamaIndex' appears to be very interesting.

Once I can train the LLM I have, turning my local AI instance into a proper personal AI Assistant shouldn't be too much of a problem.

You might be better suited with RAG than a fine-tuned model (much quicker to set up, and vastly easier to keep up to date with ever-changing information).
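For context, the core of RAG is small enough to sketch: retrieve the documents most relevant to a query and prepend them to the prompt, so the model answers from current material rather than baked-in weights. A toy bag-of-words version follows (real setups use embedding models and a vector store; the example docs are made up):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, top_k=1):
    """Return the top_k docs most similar to the query."""
    q = Counter(query.lower().split())
    return sorted(docs,
                  key=lambda d: cosine(q, Counter(d.lower().split())),
                  reverse=True)[:top_k]

def build_prompt(query, docs):
    """Prepend retrieved context so the model answers from current docs."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Because the knowledge lives in the document store rather than the model, updating it is just re-indexing the docs - which is exactly why it beats fine-tuning for fast-changing company documentation.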

23
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: August 02, 2024, 04:50 PM »
Do you mean the quality of the code? The LLMs themselves? or am I missing something
-KynloStephen66515 (August 02, 2024, 02:45 PM)

Any example I've seen so far seems to be a garbage answer, unusable or totally unfit to what it's supposed to be tailored to. I'll wait and see what's left after the dust of this hype settles :huh:

Which assistants have you tried, because I personally know quite a few (and even work for one) that are actually incredible (especially for autocomplete, but also for quickly getting code snippets, answers, bug fixes, code smells, etc)

24
Living Room / Re: AI Coding Assistants (Who uses them and which)
« on: August 02, 2024, 02:45 PM »

Simple: Nope. To me that's garbage in, garbage out.

Not sure what you mean here. 

Do you mean the quality of the code? The LLMs themselves? or am I missing something

25
Living Room / AI Coding Assistants (Who uses them and which)
« on: August 02, 2024, 02:06 PM »
Hey guys,

I'm curious to see who on the DC forum uses AI coding assistants in their IDEs or CLIs to enhance their daily coding workflow.

If you do use them, which one did you pick, and why?!
