
Have AV False Positives Improved? (Systemus 16 months later)


First, a quick thanks to BGM, Mouser, and many other authors who have created useful, or at least interesting, tools for us.

Systemus is a handy dandy system admin tool that BGM created for N.A.N.Y. 2020.  Systemus had problems with false positives from antivirus products.  Out of curiosity, I ran Systemus through three meta-scanners -- Jotti, VirusTotal, and MetaDefender -- two days ago to see what had changed.  The results did improve (fewer presumed FPs), but are far from perfect.

At the end of 2020, I submitted Systemus to a half-dozen-ish vendors, plus a few more later on.  Over the intervening couple of weeks, VirusTotal varied from 23 FPs initially, down to a minimum of 15, then back up to 17.  BGM reported 23/69 for VT in his response.  I'm not sure why we had different totals, but I can imagine several possibilities (e.g., maybe BGM ran his PE Studio analysis a couple of weeks before his reply in the thread, or maybe he scanned the .exe while I primarily noted the .zip).  FPs also decreased on Jotti (no details) and MetaDefender (8 to 5) during those few weeks.  Someone else may also have submitted FP requests during those weeks or the intervening months.  (See the end of that thread for my comment and BGM's response.)

As several developers noted in that Systemus thread (and many other threads), this can be frustrating: each antivirus false positive discourages users from experimenting with the software, but getting each AV vendor to evaluate each iteration (or even periodic stable releases) is like playing whack-a-mole at the bottom of the deep end of the pool using your nondominant foot with one eye closed and with some moles stuck in the up position.  (I'm looking at you, Webroot, who didn't actually evaluate/act on Systemus even after I submitted it twice; and you, McAfee in several guises, who has lots of fiddly restrictions; and a few others who make submitting arduous -- only via a webform with odd fields, only via a forum, only via the installed AV(!), only from the author (not a user), etc. -- or even impossible as far as I can tell.)  A couple of reputation-based products aren't (or weren't a couple of years ago) willing to whitelist little-known software even after their lab had reversed a FP in the main AV.  Some authors recommend that other authors just ignore FPs as too much trouble.
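Part of why every release restarts the whack-a-mole game: signature and reputation systems typically key on a cryptographic hash of the exact binary, and any rebuild -- even a trivial one -- produces a completely different hash, so prior whitelisting no longer applies.  A minimal illustration (the "build" bytes here are made up, not real Systemus binaries):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest that scanners commonly use to identify a file."""
    return hashlib.sha256(data).hexdigest()

# Two hypothetical releases differing by a single version string.
build_a = b"SYSTEMUS BUILD 1.0\x00<rest of binary>"
build_b = b"SYSTEMUS BUILD 1.1\x00<rest of binary>"

h_a = sha256_hex(build_a)
h_b = sha256_hex(build_b)
print(h_a == h_b)  # False: each release gets a brand-new identity
```

So any hash-keyed whitelist entry a vendor granted for build 1.0 says nothing about build 1.1, which is why each stable release has to be resubmitted.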

For comparison, these are April 2022 results for Systemus using the same files from my download folder.  The dates here identify the last update to the engine/signatures.

Jotti -- .zip (1/14):
  Fortinet  Apr 8, 2022  W32/PossibleThreat

VirusTotal -- .zip (6/59; 59 excludes incompatible or nonreporting AVs):
  AhnLab-V3          Malware/Win32.Generic.C3986407
  Fortinet           W32/PossibleThreat
  MaxSecure          Trojan.Malware.300983.susgen
  McAfee             Artemis!A5AC6681733F
  McAfee-GW-Edition  Artemis!Trojan
  Panda              Trj/CI.A

VirusTotal -- .exe (run today; 8/68, excluding 4 incompatible and 1 nonreporting engine):
Same FPs as above except:
  (different) McAfee-GW-Edition  BehavesLike.Win32.Dropper.dh
  (added)     Palo Alto Networks
  (added)     Webroot  W32.Malware.Gen

MetaDefender -- .zip (0/35, but components inside were flagged as infected; 35 includes 2 incompatible file types and 1 no result)
MetaDefender -- .exe (2/35; includes 1 no result):
  Malware/Win32.Generic  AhnLab       Apr 9, 2022
  Malware                Webroot SMD  Apr 8, 2022
MetaDefender -- .dll (1/35; includes 1 no result):
  Malware  Webroot SMD  Apr 8, 2022

I have no idea how much of the difference from two years ago is due to FPs being fixed (either proactively or because someone submitted a request), improved engines, or pruning because the minuscule installed base is no longer considered relevant (perhaps partly due to user whitelisting).

Note that, presumably due to different settings or other differences, AhnLab and McAfee reported "No Threat Detected" on MetaDefender while alerting on VT.  This is not the first time I've noticed differences in results for the same engine across these (and other) meta-scanning platforms.  VT has a statement somewhere listing various reasons its results may differ from the same vendor's installed product and/or web scanner.  Some of those reasons (plus others) also apply between meta-AV platforms.

My personal impression is that, for the same vendor, VT tends to have more hits (almost always FPs for me) than MetaDefender and Jotti, but I have seen the reverse.
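For anyone who wants to compare verdicts per engine without clicking through the web UI, VirusTotal's v3 API can return the per-engine results for a file hash.  A sketch -- the hash below is a placeholder, the API key is hypothetical, and nothing is sent unless you actually set a key:

```python
import json
import os
import urllib.request

API_KEY = os.environ.get("VT_API_KEY")  # hypothetical; use your own key
SHA256 = "a5ac6681733f" + "0" * 52      # placeholder hash, not a real sample

url = f"https://www.virustotal.com/api/v3/files/{SHA256}"
req = urllib.request.Request(url, headers={"x-apikey": API_KEY or ""})

if API_KEY:  # only hit the network when a key is configured
    with urllib.request.urlopen(req) as resp:
        report = json.load(resp)
    # Per-engine verdicts live under attributes -> last_analysis_results.
    results = report["data"]["attributes"]["last_analysis_results"]
    for engine, result in sorted(results.items()):
        print(engine, result["category"], result.get("result"))
else:
    print(req.full_url)  # dry run: just show the request that would be made
```

Diffing that output against MetaDefender's report for the same hash would make the engine-by-engine discrepancies mentioned above easy to spot.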

A couple of tools I used to automatically submit likely FPs to multiple selected sites via email were discontinued many years ago.  I saw a few attempts to create something similar, but those weren't maintained (though I haven't looked recently).  TechSupportAlert had a great list, but it is not as useful anymore -- especially for FPs.  MetaDefender's knowledge base has an updated contact list for its vendors (usually email, sometimes web or other) but doesn't explain the restrictions (i.e., it gives the address but not the content or formatting requirements).  If Jotti or VT has a similar list, I've missed it.  VT really needs one, as some of their vendors/engines are obscure to this English speaker who only dabbles in security occasionally.
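Those old auto-submission tools basically templated one email per vendor, which is mostly stdlib work to rebuild.  A sketch with made-up vendor addresses -- each vendor's real requirements (password-protected zip, subject-line format, author-only submissions, etc.) vary and have to be checked first:

```python
from email.message import EmailMessage

# Hypothetical addresses -- check each vendor's current FP policy before sending.
VENDORS = {
    "ExampleAV": "falsepositive@example-av.invalid",
    "OtherAV": "samples@other-av.invalid",
}

def build_fp_report(vendor: str, address: str, sha256: str, download_url: str) -> EmailMessage:
    """Draft (but do not send) a false-positive report email for one vendor."""
    msg = EmailMessage()
    msg["To"] = address
    msg["Subject"] = f"False positive report: Systemus ({sha256[:12]}...)"
    msg.set_content(
        f"Hello {vendor},\n\n"
        "Your engine flags this file; I believe it is a false positive.\n"
        f"SHA-256: {sha256}\n"
        f"Download: {download_url}\n"
    )
    return msg

reports = [
    build_fp_report(vendor, addr, "0" * 64, "https://example.invalid/systemus.zip")
    for vendor, addr in VENDORS.items()
]
print(len(reports))  # one drafted message per vendor; hand off to smtplib to send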

As one of the devs here who has gone through many rounds of false positive hell, I gave up caring about it many many years ago.  If my body of work isn't enough to convince a potential user to use my software, there isn't much else I can offer.  Call it apathy, or whatever, I simply don't care and don't spend a single cycle worrying about it.

To give you an idea of how ridiculous false positives can be, I once wrote a single line program consisting of one hotkey that simply EXITED THE PROGRAM.  Nothing else.  Even that didn't come up clean in the virus scans.  As you might imagine, it was about that point that I stopped caring.   ;D

To give you an idea of how ridiculous false positives can be, I once wrote a single line program consisting of one hotkey that simply EXITED THE PROGRAM.  Nothing else.  Even that didn't come up clean in the virus scans.  As you might imagine, it was about that point that I stopped caring.
-skwire (April 12, 2022, 12:01 AM)
--- End quote ---

It occurs to me that the entire approach to "software safety checking" or whatever is misguided.  Instead of "scanning" a program and guessing that it might have some code that looks similar to another program that someone said had some bad code, it may be a better approach for some "authoritative" site to actually run it on a virtual machine and see what functions it hijacks.  If it only grabs the keyboard and it is a hotkey program, then perhaps nobody should panic.  But if it hijacks everything in sight, then maybe that should be noted on the site.  I don't see how a quick scan that generates some kind of CRC result can tell you whether the intent of the application is malicious.  It is just a quick and leaky prophylactic being sold to the public, as far as I can tell.   :Thmbsup:

Add this to the ridiculous false positive list... As a test, I created a new Windows app in C++ Builder 10.x with absolutely none of my own code -- just the base project starter code -- and 2 or 3 engines claimed a false positive about a year ago.

I get mails about false positives from AVG/Avast for some of my TrID filetype definitions... which are plain XML files!  :-\

