
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Nod5

Pages: prev1 ... 39 40 41 42 43 [44] 45 46 47next
1076
General Software Discussion / Re: MakeUseOf.com - Nice Blog
« on: October 25, 2007, 08:47 AM »
Indeed a very good site. I like the fact that the posts often have a single theme (15 best of this, 10 coolest of that...). That said, I think a "best of the week" post (and separate RSS feed), like the one boingboing, lifehacker and some other sites have, would be a good thing, since the posting frequency is quite high.

1077
General Software Discussion / Re: Ubuntu 7.10 Released today
« on: October 23, 2007, 12:35 PM »
I've played around with this for a few days now. It's an incremental improvement over the last version, as usual. I love that they've included the Compiz visual effects by default now. I hadn't gotten around to trying them before, but now I can't get enough of them. I keep switching workspaces just for the fun of it!

Come to think of it, I have a hard time distinguishing genuine improvements over the last release from perceived improvements due to me getting to know the OS better. For instance, I just learned how to make a hotkey to empty the trash can, just like the AutoHotkey hotkey I have for that on XP. But the command line argument I used for it seems to have been available for a very long time - I only discovered it now.

Maybe this is a second good reason for the biannual Ubuntu release schedule (the first being objective improvements at a steady pace): it gives those of us who mostly keep trying out and test-driving Ubuntu on secondary machines a way to get used to it in a more engaged manner. By reinstalling and then, out of curiosity, looking around for and reading up on new features, we tend to learn a lot about old features too.

1078
Sorry, I posted a message here that was meant for this thread: https://www.donation...?topic=10452.new#new
I've removed it now. Mouser: when I first browse several pages on the DonationCoder forum in Maxthon and then log in to post in one of them, after authentication the forum sometimes redirects me not to the thread I started in, but to one of the threads active in another tab. Could that be due to some error in the DonationCoder forum code?

Since I posted in this thread I might as well say something on topic also: I think a firefox plugin mirroring the functionality of the Maxthon 2 feature Text Filter ( http://forum.maxthon....php?showtopic=62875 ) would be very useful. But maybe it's too complex a project or not doable as a plugin. I'm not yet capable of coding any plugin myself so I'm just voicing an idea here.

1079
I have for some time used Yahoo Pipes http://pipes.yahoo.com/pipes/ to tweak some RSS feeds the way I want them. Now I want something similar for my email. Specifically, I subscribe to a few email lists that once a day send me an email containing all posts to the list from that day. That email is poorly designed: it begins with a lot of information on how to leave the list, so I (and everyone else) have to scroll down two pages to reach the index of included emails. I want to remove that initial text. Since it's always the same, it would be easy to do with a regular expression or even a simpler find/replace tool.
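For what it's worth, the stripping step itself is trivial in any language with regular expressions. A minimal Python sketch (the marker text is made up here; a real filter would match whatever fixed line actually ends the preamble in the digest):

```python
import re

# Hypothetical sketch: assume the digest's fixed preamble always ends with
# the same marker line (the marker text here is made up). Everything up to
# and including that line is stripped in one pass.
PREAMBLE_END = "To unsubscribe, follow the instructions at the end of this message."

def strip_preamble(body: str) -> str:
    """Remove the boilerplate that precedes the digest's index of posts."""
    pattern = r"^.*?" + re.escape(PREAMBLE_END) + r"\s*"
    return re.sub(pattern, "", body, flags=re.DOTALL)

digest = (
    "How to leave this list...\n"
    "More housekeeping text...\n"
    + PREAMBLE_END + "\n\n"
    "Index of today's posts:\n"
    "1. First post\n"
)
print(strip_preamble(digest))
```

The hard part isn't the text transform, it's finding an email client extension that applies it to incoming messages automatically.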

So I need an extension for Thunderbird or Outlook Express that lets me automatically modify the text of all incoming messages that fit some filter (sender email address, in my case). Has anyone seen such a thing?

I've quickly searched https://addons.mozil...g/en-US/thunderbird/ but didn't find a match. The closest thing I got was a Thunderbird extension called NestedQuote Remover, http://email.about.c.../nestedquote_rem.htm : "NestedQuote Remover swiftly (automatically even if you desire so) eliminates stale quoted text from replies to replies and leaves only the most recently quoted email in place."

1080
These newsletters are great! Thank you Darwin!  :Thmbsup:

1081
Mini-Reviews by Members / Re: List of disc catalogers
« on: October 10, 2007, 02:57 PM »
I'm looking for a disc cataloger that works on both Windows and Linux (Ubuntu).

I found some leads on Ubuntu catalogers in this thread: http://ubuntuforums....806&postcount=12 . One promising cross-OS cataloger mentioned there is VVV, Virtual Volumes View, http://vvvapp.sourceforge.net/ , currently at version 0.7.
[screenshots: vvv-physical-windows.gif, vvv-virtual-linux.gif]
Drawbacks: the features are still very basic. No reports, no specific file icons, no copying of file names, and several other no's. But I like it so far. It's slim and fast and actively developed (in comparison, some other catalogers for Ubuntu haven't been updated in years). Does anyone know of a good cross-OS cataloger with more features than VVV?

Tomos, here are details for VVV for the big list on the first page of this thread:
Cataloger    Homepage    Free-/Shareware/Price       Last Version / Change
VVV (Virtual Volumes View)   http://vvvapp.sourceforge.net/   FOSS   v0.7 2007-09-24

1082
Thanks. When I have time to try this I'll probably make a duplicate of the music folder with only one file per album subfolder and then let iTunes do its thing on that.

1083
Urlwolf, this looks interesting! Do you have time to explain a bit more? What do I need to install to get this to work? Do I need iTunes, and do I then have to import all my music into it before I can download the cover images? If so, is there a risk that installing iTunes will screw up the tags in my mp3 files? Is the process completely automatic once it is set up properly?

1084
Finished Programs / Re: SOLVED: Screen rotation -> possible in AHK ?
« on: September 17, 2007, 03:39 PM »
ak_,
if you have an ATI graphics card then the utility ATI Tray Tool (ATT) might work: http://www.guru3d.co...le/atitraytools/189/ .
It is free, supports screen rotation and can be set to use any hotkey you want.
[screenshot: att.png]
And if you don't want it running all the time, you could make an AHK script that, when a hotkey is pressed, launches ATT, sends the screen rotation hotkey, and exits (and on the next hotkey press toggles back). You could also easily add a step where some additional tool saves/restores the desktop icon positions (ATT can save/restore icon positions, but if I remember correctly it's not possible to trigger that through a hotkey).
And if you don't have an ATI card, you can forget all this. ;D

1085
ethan, jibz, thanks very much for the replies and examples and links!

ethan:
I guess the problem would be to find the function.
Below is a program that encodes a number using power tables.
For some data value, you will find that in order to represent it, you would need more storage than its original form.
Ok, now I see the complications more clearly. Still, let me try resisting some more: if the representation only becomes longer in SOME cases, doesn't that mean it would be shorter, maybe even drastically shorter, in other cases? If there was some chance of getting, for example, a 1 GB file compressed to a file 5 MB or smaller, then wouldn't it in many cases be worth spending the time and CPU effort required to test whether this type of compression works?  :tellme:

Also, couldn't a compression tool loop over a number of such programs/algorithms large enough to guarantee that a drastically compressed output would be found through trial and error for any input file? Or would the problem then be that a completely impractical amount of time/CPU power is needed?

jibz:
Let's look at why no algorithm you may devise will be able to compress any large amount of data into a short formula.

You want your decompression function to be a bijection, by which I mean it should be possible to decompress into any original data, and given compressed data there should be only one possible decompressed value.

The short formula your algorithm produces will be represented in the computer in a number of bytes. Since these bytes only give you a finite number of possible values, which is small compared to the vast amount of possible values of the original large data, the decompression function cannot possibly be bijective.
Ok, I think I only understand the general idea in the linked documents. Not the detailed math on the bijection wikipedia page. And I don't understand the proof on the other wikipedia page - "Lossless data compression must always make some files longer" - but I understand by the tone of it that they firmly see it as impossible. I'll try parsing through it again later tonight though.

But still trying to resist, one objection I can think of is this: our algorithm need not be able to compress ANY large amount of data. Since I imagined taking the raw binary string as input, this would always be a long string of only 1s and 0s, say 1 GB in size. By then treating that as a numeral, wouldn't we get a significantly more limited range of inputs that the algorithm must be able to find a shorter representation for? (all numbers of length N containing only 1s and 0s, instead of all N-digit numbers)

And in contrast, the representation can be a complex function containing the numerals 0-9 and all the rich components from the "math toolbox". (That representation must then in turn be constituted on the disk as a string of 1s and 0s, but if the representation on the higher level only takes some lines of text (including special math characters) then that's only some KB or maybe MB in size on the disk - still an extreme compression)

Also, the compression utility containing the algorithm could be optimized to only work on input of a set length, like 1 MB or 1 GB. For example, it could as a preliminary step always store the initial input file in 1 MB chunks, like a multi-part zip archive. That limits the number of possible inputs some more, right?

Also, the idea above in response to what ethan wrote could be added: looping over multiple algorithms in a trial and error fashion until one that gives good compression for the specific input is found. Since the size of the compression utility wouldn't be an issue, it could be several MB in size and contain VERY many algorithms (which could together be seen as one big algorithm, I guess).

Wouldn't these things (that I'm intuitively trying to pull out of the hat here) make a big difference? Or have they already been taken into account somehow in the negative answers given in the links above?
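(A side note: the counting argument quoted above is easy to check numerically. A small Python sketch, just tallying possibilities rather than implementing any compressor:)

```python
# There are 2**n distinct n-bit files, but only 2**0 + 2**1 + ... + 2**(n-1)
# = 2**n - 1 strings that are strictly shorter. So any lossless scheme that
# shrinks some n-bit inputs must leave at least one other input the same
# size or larger -- there simply aren't enough short outputs to go around.
def count_files(n_bits: int) -> int:
    return 2 ** n_bits

def count_shorter_strings(n_bits: int) -> int:
    return sum(2 ** k for k in range(n_bits))  # all lengths 0 .. n-1

for n in (3, 8, 20):
    print(n, count_files(n), count_shorter_strings(n))
```

The shortfall is always exactly one string, but it holds at every length n, which is what blocks a universal shrinker.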

Finally, I notice I'm posting many new questions here. There's risk that I'll suck you guys into a long discussion with more and more skeptical questions like these that perhaps no one else but me will get something out of ;D  So I just want to say: feel free to opt out at any time. That said, I appreciate all answers I get of course.

1086
ethan, mouser: thank you both for the feedback.
I urge you to delve more into compression, it really is a beautiful subject.
I'm trying! ;D

If I understand you correctly, the answer is: it can.

There is nothing to stop you from compressing at the bit level, with things like strings of bits, but the question is would it give you higher compression than working at the higher levels, and the answer is almost never, because the chunks of redundant information (the patterns) are occurring at a higher level (like repeated letter pairs in English, or byte patterns representing colors, etc.). It's also the fact that working at the bit level is more time consuming for the CPU than working with bytes or words.

Hm, I fail to understand how either of the two articles ethan linked to, or what mouser says about chunks/patterns, exemplifies the kind of compression I was imagining. :( Either I missed something in the text / your replies, or I was confused in my description of the kind of compression my question concerned. On the latter: I previously explicitly distinguished between "higher level" and "lower level", but I now think I really meant more than that. Maybe this is a clearer way to put it: the compression I was imagining would occur on a certain LEVEL and with a certain APPROACH: (i) at the lower level (i.e. ones and zeroes) AND (ii) working on the binary string as a whole.

Let me try to clarify what I mean with (ii): I (think I) see how Huffman coding differs from the run-length encoding example I quoted above. Huffman works on the lower level. But they still seem to me similar in approach: they find a "translation table" for chunks of the original string such that the table and the new string (after translation) together are shorter than the original string. That's different from the compression I was imagining, I think. Maybe I can put it like this: we want to compress a very large text file, so we do Huffman compression. Whatever optimal output Huffman gives us, it will be constituted by a string of binary states on the hard drive: 011000 ... 0010101. The compression I'm imagining would at that point jump in, see the entire string as one "mathematical entity" and try to find a much shorter mathematical representation for it. But that compression would NOT look for parts of the string that are repeated (like "10000001" in "100000011000000110000001"). Instead, it should find a shorter mathematical way to represent the same, entire string. An example of what I'm imagining, but with numerals: the entire number "28421709430404007434844970703135" can have the shorter representation "(5 to the power 45)+10". Why can't the string of binary states, seen as one mathematical entity, be shortened like that? Would it be impossible for some reason? Or perhaps just not practical due to larger CPU and time requirements? Or is there some other reason? (Or is the question, still, confused?)  :tellme:
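(The arithmetic in that example does check out, and a toy experiment hints at the catch: very few numbers are within reach of any fixed vocabulary of short formulas. A Python sketch, where the formula family b**e + c is an arbitrary illustration, not a real compression scheme:)

```python
# The arithmetic in the example checks out:
assert 5 ** 45 + 10 == 28421709430404007434844970703135

# Toy experiment: how many numbers below 10**9 can be written as b**e + c
# for small b, e, c? (This formula family is an arbitrary illustration.)
LIMIT = 10 ** 9
reachable = set()
for b in range(2, 50):
    e = 2
    while b ** e < LIMIT:
        for c in range(100):
            if b ** e + c < LIMIT:
                reachable.add(b ** e + c)
        e += 1

# Far fewer than LIMIT numbers have such a short representation.
print(len(reachable), "of", LIMIT)
```

Numbers like 5^45 + 10 are the lucky exceptions; a typical billion-digit binary string has no short formula of any kind.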

1087
Hi Mouser,
Yes, I understand this kind of question is hard to answer in forum posts like these, especially for someone like me who doesn't even know the basics. I've previously tried browsing through some online guides to compression aimed at laymen, but the ones I've seen either don't seem to touch on my question or are too complex for me to master. But maybe you or someone else can help me out with some keywords relevant to the "type" of compression I'm curious about? Because even if it is not possible, I suspect there might still be a name for it, X, and texts explaining why X is not possible.

More specifically, most popular descriptions of how compression works that I find seem to focus on strings of characters, like this one for example:
Run-length encoding (RLE) is a very simple form of data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. This is most useful on data that contains many such runs: for example, relatively simple graphic images such as icons, line drawings, and animations.

For example, consider a screen containing plain black text on a solid white background. There will be many long runs of white pixels in the blank space, and many short runs of black pixels within the text. Let us take a hypothetical single scan line, with B representing a black pixel and W representing white:

WWWWWWWWWWWWBWWWWWWWWWWWWBBBWWWWWWWWWWWWWWWWWWWWWWWWBWWWWWWWWWWWWWW
If we apply the run-length encoding (RLE) data compression algorithm to the above hypothetical scan line, we get the following:

12WB12W3B24WB14W
Interpret this as twelve W's, one B, twelve W's, three B's, etc.
http://en.wikipedia..../Run-length_encoding
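(The quoted scheme is small enough to reproduce directly. A minimal Python sketch, following the convention above of omitting the count for runs of length 1:)

```python
from itertools import groupby

def rle_encode(s: str) -> str:
    """Run-length encode; the count is omitted for runs of length 1."""
    parts = []
    for ch, run in groupby(s):
        n = sum(1 for _ in run)
        parts.append((str(n) if n > 1 else "") + ch)
    return "".join(parts)

# The hypothetical scan line from the quoted example:
line = "W" * 12 + "B" + "W" * 12 + "B" * 3 + "W" * 24 + "B" + "W" * 14
print(rle_encode(line))  # 12WB12W3B24WB14W
```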

That I grasp. But examples like the one above only focus on compression on the "higher level" of strings of characters ( http://computer.hows...ile-compression2.htm has similar examples). They don't say anything about applying compression to the "lower level" binary structure on the hard drive etc that (somehow) constitutes the characters.

But my question was about applying compression to that underlying binary structure. Let me try to flesh out the imagined compression a bit more: imagine that the long string WWWW ... WWW above is constituted by the binary states 10111101100100011 ... 1001011 on the hard drive. After the "higher level" run-length encoding compression above, the short string 12W ... 14W is constituted by the shorter string of binary states 1001 ... 11001. My question was why compression couldn't work directly on that string, seeing it as a long number 1001 ... 11001 and finding a much shorter mathematical representation, M, for it. M is represented on the "higher level" in a few rows of characters, which in turn is saved as the compressed file, constituted by a much, much shorter string of binary states 101 ... 011 on the hard disk.

1088
Ok, this is one of those posts where you ask a question that you have a firm hunch is silly or even stupid and will have a simple answer that you should probably already know but don't. I guess most people can feel a hesitation to ask such questions since they then risk looking stupid. Well, the alternative is often worse: to remain stupid. So I'll take that risk... :-[ The question concerns how data compression and data storage works.

I think I understand the basic idea to compression, as described here http://en.wikipedia....iki/Data_compression :
In computer science and information theory, data compression or source coding is the process of encoding information using fewer bits (or other information-bearing units) than an unencoded representation would use through use of specific encoding schemes. For example, this article could be encoded with fewer bits if one were to accept the convention that the word "compression" be encoded as "comp."

And the basic idea of data storage, as described here http://en.wikipedia....ard_drive#Technology :
The magnetic surface of each platter is divided into many small sub-micrometre-sized magnetic regions, each of which is used to encode a single binary unit of information. In today's HDDs each of these magnetic regions is composed of a few hundred magnetic grains.

(So the hard disk is basically "a long string of ones and zeroes")

Ok, now my question:
Why can't compression of a file work by seeing the long string of ones and zeroes on the hard drive that constitutes the file (presupposing the file is not fragmented) as a very long number, and then finding some much, much shorter way to mathematically represent that same number? To later decompress the file, the mathematical shorthand is expanded into the long binary string again, which in turn gets written (in unfragmented form) to the hard drive. And so the file is back. I imagine that if that were possible, then a file many MB in size could be compressed to a mathematical expression a few lines of text long.

I'm asking why this is NOT possible because I have a strong hunch that if it was possible then it would already be done all the time. But I'd just like to get a general grasp of why it is not possible. So help me out here. :tellme:
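(One way to make the obstacle concrete: real compressors already behave exactly as the counting objection predicts. A short Python demonstration using the standard zlib module; the exact sizes vary from run to run, but the direction doesn't:)

```python
import os
import zlib

# Repetitive data shrinks dramatically, but patternless (random) data comes
# out slightly LARGER, because the container format adds overhead and there
# is nothing for the algorithm to exploit.
repetitive = b"WB" * 50_000          # 100 KB with an obvious pattern
random_ish = os.urandom(100_000)     # 100 KB with (almost surely) no pattern

print(len(zlib.compress(repetitive)))   # a few hundred bytes
print(len(zlib.compress(random_ish)))   # not less than 100000 bytes
```

A long binary string read as "one big number" is, for almost all files, exactly like the random case: most numbers have no representation shorter than themselves.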

1089
A small idea, similar to some previous ones but still not identical I think:

Pressing Ctrl + C currently copies the path to the selected item in the results list.

Enhance that by letting Ctrl + C do different actions based on the number of times it has been pressed (for the very same item):
1st time - path to clipboard
2nd time - filename to clipboard
3rd time - file to clipboard
4th time - path to clipboard
...
For each such keypress, indicate the action at the far end of the statusbar: "path", "name", "file".
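A sketch of the suggested behavior in Python (all names here are made up; FARR itself would implement this internally): repeated presses on the same item cycle path -> name -> file -> path, and selecting a different item resets the cycle.

```python
from itertools import cycle

# Hypothetical sketch of the suggested Ctrl+C behavior. The class and
# method names are invented for illustration only.
ACTIONS = ("path", "name", "file")

class CopyCycler:
    def __init__(self):
        self._item = None
        self._actions = None

    def press(self, item):
        if item != self._item:          # new item: restart the cycle
            self._item = item
            self._actions = cycle(ACTIONS)
        return next(self._actions)

c = CopyCycler()
print([c.press("a.txt") for _ in range(4)])  # ['path', 'name', 'file', 'path']
print(c.press("b.txt"))                      # path (reset on new item)
```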

1090
General Software Discussion / Re: SubmitToTab - a Firefox add-on
« on: August 28, 2007, 03:36 AM »
Nice feature... One use for this would be making forum posts / forum registrations when you're unsure how stable the forum functionality is. I bet most of us have experienced this: you type a long post on some forum / forum registration page and press post, only to get an error page, a blank page, or a you-didn't-fill-in-the-important-fields-AAA-BBB message. So the form is not submitted. And pressing back in the browser gets you back, but everything you'd entered is gone.


Let me also back up Darwin a bit... These features make me stick to Maxthon over Firefox:

1. groups (that is, saving multiple tabs to a .cgp file. Like a .url favorite but with multiple URLs. Very handy, easy to back up and manage.)
2. external utilities (a toolbar where I can add any external tools started through the command line: AHK scripts, other applications and so on -- they can be started with Maxthon variables as command line parameters, like the current page URL and title)
3. alt + Z (open last closed tab. I know I can right click tab in FF and do that but I want that hotkey)
4. automatically adding more tab rows when a lot of tabs are opened
5. powerful Text Filter functionality (that is, allowing you to regexp-replace content on any page. It's sort of like the 3rd party tool Proxomitron but built in; it got added in Maxthon 2. Very powerful and customizable)

1091
icekin,
great links to the intermediate services!

Mouser,
You've probably already considered it since it's been mentioned on DonationCoder before, but: Yahoo Pipes accepts a lot of feeds as input, manipulates them in powerful ways (including complex, regexp-driven filtering), and then outputs to some other RSS reader. A drawback: it only pulls and filters the input feeds when you pull the output feed. So if you don't check it regularly you will still miss matches, since they will have been replaced by newer items in the input feeds. Automatic checking and storing IS possible, but it seems to take a lot of tweaking, like creating loops through external sites like FeedBurner -- there are some threads in the Yahoo Pipes forum if anyone wants to try that.

icekin or anyone else,
can any of the intermediate services listed above automatically check and store matches in a way that is easy to set up?

1092
ham,
A question: is there some quick way to generate a string with {ALL} (or any other mask syntax) that is exactly as long as the user chooses? For example, typing "datagen {ALL}*22" to generate a string with 22 characters. I know I can type "mkpass" and get some preset password lengths/types, but I'm asking about choosing any length.

1093
But when you use CAPS as a hotkey combo in AHK, it isn't (by default) supposed to change the usual toggle mode of the CAPS key, I think. So whatever state it was in before stays set afterwards. That's how it works when I try it, at least. You can put ~ before a hotkey to force AHK not to block the native function, though.

1094
Yes, I have to admit it's pretty dodgy after testing it again here.  ;D The right click context menu keeps popping up now and then even though it isn't supposed to. Not sure why. The script only took a few (fun) minutes to piece together from the Easy Window Dragging code. I actually just recently discovered the Easy Window Dragging script, liked it a lot, and had thought about expanding it somehow, so this was good practice. Other ways to expand it that I've since found really useful: CAPS + middle mouse ---> minimize the window under the mouse pointer, CAPS + shift + move mouse ---> resize the window under the mouse pointer.

1095
Here's an AHK script that gives basic, vertical hand tool functionality when holding CAPS + right mouse button + dragging. It should work for all programs that use mousewheel up/down to scroll up/down. Change the sensitivity through the xsens variable in the code if needed. It's very limited though: no horizontal scrolling, no drag hand icon. So if you already have a mouse with a scrollwheel then it's not so very useful. Still it lets you do long scrolls a bit faster and with more "flow" I think.

By the way, Easy Window Dragging (that I built this upon) is extremely useful - try it anyone who hasn't yet.


; Easy Window Scrolling
; adapted from Easy Window Dragging
; http://www.autohotkey.com/docs/scripts/EasyWindowDrag.htm

CapsLock & RButton::
CoordMode, Mouse  ; Use screen/absolute coordinates.
MouseGetPos, EWD_MouseStartX, EWD_MouseStartY

SetTimer, EWD_WatchMouse, 10 ; Track the mouse as the user drags it.

xsens = 20     ;<------- change sensitivity here if needed
return

EWD_WatchMouse:
GetKeyState, EWD_RButtonState, RButton, P
if EWD_RButtonState = U  ; Button has been released, so the drag is complete.
{
    SetTimer, EWD_WatchMouse, off
    return
}

; Otherwise, scroll to match the change in mouse coordinates
; caused by the user having dragged the mouse:
CoordMode, Mouse
MouseGetPos, EWD_MouseX, EWD_MouseY
ychange := EWD_MouseStartY - EWD_MouseY
if ychange > %xsens%
 SendInput {WheelDown}
if ychange < -%xsens%
 SendInput {WheelUp}

EWD_MouseStartX := EWD_MouseX  ; Update start position for the next timer call.
EWD_MouseStartY := EWD_MouseY
return

1096
great script lanux128!
one idea: maybe add a button to automatically open a new email message window with the entered text as message body? It could use the same mailto: solution that FARR uses perhaps.

1097
This was unpleasant news indeed!

They were saying it was very bad to sleep in the same room as a laser printer -
I've been doing that for years...   :o

..which made me wonder: are they even dodgy if you don't use them much?  :tellme:
Good question. I suspect the particles are released when the printer is used. But how long they then stay in the air in the room is another question.

Another thing I'm curious about: does the type and brand of toner make a noticeable difference here (for the same printer)? Cheap alternative toners from 3rd party no-name manufacturers are pretty common. When it comes to ink cartridges I usually go with the 3rd party brand for my personal usage. It's usually about half the cost and I have a hard time finding any real difference in printing quality. I've had pretty much the same approach when it comes to laser toner, but this new particle risk scenario may change my mind about that...

1098
mouser, just an addition: I later realized that the brief black background I wrote about also appears every time the window gets auto-shrunk (or de-shrunk), given these settings:
- checked: options > auto-shrink window to fit results
- set to some degree: options > window transparency

1099
IconBoy: since you talked about Outlook Express I assume that's what you'd like to see a character count for in this thread too. I tried making a small AHK script that grabs the text from the new message window, but I didn't get the preferred method to work (that is, this did not work: ControlGetText, some_output_variable_name, Internet Explorer_Server1, ahk_class ATH_Note). I think it has to do with the type of control the new message window is. Without a way to work on that control directly, I can only think of quite messy ways to count the characters through AHK. For example: send Ctrl+A (select all) and Ctrl+C (copy) to the text, count the number of copied characters, and show the result. Like this, assuming that the email input box in the new message window has focus:

F7::
TrayTip
clipboard =           ; clear the clipboard so ClipWait can detect the copy
sendinput ^a
sleep 100
sendinput ^c
ClipWait, 1           ; wait (up to 1 second) for the copied text to arrive
StringLen, xvar2, clipboard
TrayTip,, string length = %xvar2%,2
sendinput {end}
return

Drawbacks with that: 1. it requires manual input (pressing F7 above, or some other hotkey), since letting the script autorun repeatedly would prevent the user from adding text due to the select-all step (Ctrl+A); 2. the text selection is noticeable and therefore visually disturbing; and 3. the current caret position is lost due to the select-all step.

Still, that kind of character counting on hotkey press might be enough in some situations perhaps?

1100
I've noticed a small GUI-related thing...

If I set FARR v2.00.135 to
- checked: options > auto-shrink window to fit results
- unchecked: options > alpha fade into view
- set to some degree: options > window transparency
(... perhaps some other setting - i've played around with many of them)
I then get a very brief but clearly noticeable black-colored FARR results window when launching FARR. It's as if the whole results part of the window first gets painted black in a flash and then is quickly repainted as it should be. I hope that description makes sense - I can't think of a better way to put it.

Changing any of the three settings above removes the brief "black flash".
