
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Gothi[c]

Pages: prev 1 ... 27 28 29 30 31 [32]
776
Mini-Reviews by Members / Re: Distributed compiling and clustering
« on: April 05, 2006, 02:58 AM »
That's cool, gjehle! I wish I had a 64-bit CPU to fool around with.

I actually got around to compiling Code::Blocks on Linux, and it seems stable enough so far (unlike what I've read in reviews of past Linux versions), though time will tell. But here's the twist:
It was -very- easy to make code::blocks use distcc!

Just replace all the g++ / gcc with distcc in Settings->Compiler.
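For command-line builds the same idea is a two-line change. Here is a minimal sketch, with made-up host names (fastbox, slowbox) standing in for your own machines:

```shell
# distcc hands jobs to hosts in the order listed, so put the
# fastest machine first. "fastbox" and "slowbox" are hypothetical.
export DISTCC_HOSTS="fastbox slowbox localhost"

# Same trick as in Settings->Compiler: put distcc in front of
# the real compiler.
CC="distcc gcc"
CXX="distcc g++"
echo "$CXX"
```

Any build system that respects CC/CXX then distributes automatically.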

Everything still works normally; you still get all the warnings in the log, etc., you just get a major speed boost :)

Here's a screenshot of my 3 GHz box doing the work for my slow 1 GHz Dell disaster computer.


777
Well, the first screenshot shows storm areas in blue in Xastir,
the second one shows temperatures on a map,
the third one is APRSdos tracking a hurricane (Xastir and UI-View can do this too, but Xastir is best at it),
and the fourth one is a weather station's details in UI-View, showing wind speed, direction, etc.

778
To get the most extreme crazy awesome weather info, you guys should play with APRS.

APRS (Automatic Position Reporting System) is a technology that takes GPS information, along with a whole bunch of other information (weather info, icons, messages, ...) and transmits it over the AX.25 protocol (the radio amateur version of the X.25 network protocol), which can run over radio or over the internet. The latter is very interesting, even for non-radio-amateurs, as there is a very big network of servers all over the world where radio amateurs and official services broadcast their weather information, amongst other things.

This is interesting for two reasons. First of all, the clients available for this protocol are -VERY- advanced. They can download satellite imagery from the net, overlay live weather radar, add weather warning zones as an additional layer, work with Tiger maps or other detailed map data, etc. ... the possibilities are endless, and most clients allow you to write your own plugins. Secondly, you get access to a massive amount of observations from all around the world. I'm not gonna get into the details as they are a bit complex, but I gathered some screenshots from Google for you guys.
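For the curious, the internet side is approachable even without a full client: logging in to an APRS-IS server is a single text line. A minimal sketch, assuming the placeholder callsign N0CALL (passcode -1 is the standard receive-only login, and the server in the comment is just one pick from the public server lists):

```shell
# Build the one-line APRS-IS login string. "N0CALL" is a
# placeholder callsign; passcode -1 grants receive-only access.
CALL="N0CALL"
LOGIN="user $CALL pass -1 vers demo 0.1"
echo "$LOGIN"

# To watch live traffic you would pipe that line into a TCP
# connection to an APRS-IS server, e.g. the filtered port 14580:
#   echo "$LOGIN" | nc rotate.aprs2.net 14580
```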

Also check out http://en.wikipedia.org/wiki/APRS

Best APRS software:

For newbies: UI-View has a very user-friendly interface and is a very powerful program. http://www.ui-view.org/#downloads

If you want it all: Xastir is a cross-platform APRS suite that does about everything. http://www.xastir.org/

APRS internet server list: http://www.aprs-is.net/APRSServers.htm



Here are the screenshots:





779
Mini-Reviews by Members / Re: Distributed compiling and clustering
« on: March 29, 2006, 11:31 AM »
Well, consider the scenario where you have a 1 GHz machine and a 2 GHz machine with distcc. The 2 GHz machine compiles "small.c" while the 1 GHz compiles "templateheavy.cpp", and then the 2 GHz machine has to wait for small.obj before it can link. This is of course an oversimplification, but the idea is that there are some "sync points" in makefiles/builds where you have to catch up :(

Well, if you have the priority set so that the 2 GHz machine comes before the 1 GHz machine, wouldn't it compile it on the 2 GHz machine?
-or- you could just take the 1 GHz machine out of the loop. Tell it not to use the slower one. Like, if you want to compile something on a 1 GHz machine and there are three 2 GHz machines, it'd still be a lot faster just not to compile on the 1 GHz machine at all, and take localhost out of the host list... dunno :)


780
Mini-Reviews by Members / Re: Distributed compiling and clustering
« on: March 29, 2006, 11:07 AM »
Isn't there this issue with distcc, though, that you will be somewhat bottlenecked if some machines in the compile farm are noticeably slower than the others?

Only when it is misconfigured... there is a FAQ entry on the distcc site about this. When you set the hosts in the proper order and you set the correct -j option, it should be fine. The machines with the most speed (or the least load) go at the beginning of the hosts list. If you had a huge farm that was also running lots of other things, you could write an easy little script that updates the order according to the load on the different machines. Running it on a home network with a few computers, you shouldn't ever run into that problem.
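That FAQ advice boils down to a couple of lines. A sketch with invented host names; the doubling rule of thumb for -j follows the distcc documentation:

```shell
# Fastest (or least loaded) machines first: distcc prefers hosts
# earlier in the list. "quad" and "fast" are hypothetical names.
export DISTCC_HOSTS="quad fast localhost"

# A common rule of thumb: run roughly twice as many parallel
# jobs as there are hosts/CPUs available.
NHOSTS=$(echo "$DISTCC_HOSTS" | wc -w)
echo "make -j$((NHOSTS * 2))"
```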

cygwin sucks and doesn't count


I agree, but at least you can build a Windows binary with it using distcc, which is better than nothing at all, I guess. I also dislike Cygwin, which is why I switched to Gentoo on VMware.

Btw, you should check out http://ccache.samba.org/ and combine it with distcc...

Yeah, distcc works great with ccache :)
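One way to wire the two together, sketched here under the assumption that both tools are installed (the host name fastbox is made up): ccache's CCACHE_PREFIX setting makes it hand every cache miss to distcc.

```shell
# ccache answers repeat compiles from its cache; on a cache miss,
# CCACHE_PREFIX tells it to run the real compile through distcc.
export CCACHE_PREFIX="distcc"
export DISTCC_HOSTS="fastbox localhost"

# The build then uses ccache as its compiler front end.
CC="ccache gcc"
echo "$CCACHE_PREFIX / $CC"
```

Hits never leave the local cache; only misses travel across the farm.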

781
Mini-Reviews by Members / Distributed compiling and clustering
« on: March 29, 2006, 10:30 AM »
Distributed compiling and clustering

Compiling large amounts of code takes time, sometimes even massive amounts of time. Some of us see the test build as a well-deserved coffee break, but after a while it gets old. Having to wait a long time, in some cases hours, for a compile to finish is not only counter-productive but in many cases makes debugging a lot harder. (E.g.: you can't just make a quick change in your code, quickly run it, and see what it does.)

The solution: Gather around those old computers you have lying around everywhere (and I'm sure many of us do), and set up a little compile farm to help your computer crunch some code.

The tools: Finding the right tool for the job can be a bit tricky, and reading up on clustering might initially make your head explode, but there are truly some great tools out there that make this all too easy, and that's what this review is about.

In my short adventure through the distributed compiling and clustering world, I have run into quite a few options:



The first was building a Beowulf cluster with tools such as Heartbeat (http://www.linux-ha.org/).
From Wikipedia (http://en.wikipedia.org/wiki/Beowulf_cluster):


A Beowulf cluster is a group of usually identical PC computers running a FOSS Unix-like operating system, such as GNU/Linux or BSD. They are networked into a small TCP/IP LAN, and have libraries and programs installed which allow processing to be shared among them.



Unfortunately, not all my computers are identical, so this was not an option.



Then there is openMosix, a kernel patch for Linux that lets you share CPU power and memory across any number of machines, or as Wikipedia (http://en.wikipedia.org/wiki/OpenMosix) describes it:


openMosix is a free cluster management system that provides single-system image (SSI) capabilities, e.g. automatic work distribution among nodes. It allows program processes (not threads) to migrate to machines in the node's network that would be able to run that process faster. It is particularly useful for running parallel and intensive input/output (I/O) applications. It is released as a Linux kernel patch, but is also available on specialized LiveCDs and as a Gentoo Linux kernel choice.




And last but not least there is distcc, which is actually the only one of these that will work with Windows. Distcc differs from the above in that it focuses on distributed compiling rather than general clustering, and it requires very little setup. You can use it together with ccache, which makes it even faster. From Wikipedia (http://en.wikipedia.org/wiki/Distcc):


distcc works as an agent for the compiler. A distcc daemon has to run on each of the participating machines. The originating machine invokes a preprocessor to handle source files and sends the preprocessed source to other machines over the network via TCP. Remote machines compile those source files without any local dependencies (such as header files or macro definitions) to object files and send them back to the originator for further compilation.




Note that none of the above requires any tampering with makefiles or creating complex build scripts.



The results:

openMosix had a fairly easy setup (just configure/install the kernel and run the daemon) and did seem to do a good job with CPU-intensive applications (such as a compile job). However, I ran into problems now and then, getting segmentation faults. I assume it was my fault, but after playing with it for a day I was ready to try something new.

Distcc was VERY impressive. It seems like the perfect tool for the job. Setting it up was very easy (just install distcc and set it as your default compiler; it has a configuration tool that sets the participating hosts, and you just have to start the distccd daemon, specifying which IPs to allow) and it worked right away. It does require that your build environments have the same versions of things (like the same version of gcc, ld, etc.), but that's quite easy to deal with. I must say the speedup was significant. Distcc comes with a monitoring tool (openMosix does too) that shows the running jobs on the farm. Now I can finally compile things on my slow computer while taking advantage of the speed of my faster computer. :)

I tested distcc with one machine running Windows (distcc running in Cygwin) and the other running Gentoo Linux (http://www.gentoo.org). Because the two platforms were different I had to set up a cross-compiling environment (binutils comes in handy), which worked out just fine. Later I tried it with one machine running Gentoo and the other running Gentoo in VMware on a Windows XP host. I must say this was the easiest of all, as no additional cross-compilation setup was needed.
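The setup described above boils down to a couple of commands. This is a hedged sketch, not a literal transcript of my setup: the subnet and host names (helper1, helper2) are placeholders, and your distcc version's flags may differ.

```shell
# On every helper machine: start the daemon, allowing only the
# local subnet (192.168.0.0/24 is an assumed example network).
distccd --daemon --allow 192.168.0.0/24

# On the machine driving the build: list the helpers, fastest
# first, then build with distcc wrapped around the compiler.
export DISTCC_HOSTS="helper1 helper2 localhost"
make -j8 CC="distcc gcc" CXX="distcc g++"
```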

I also read that you can set up distcc to run on openMosix, but I did not get into that. (I'm curious what the difference in benchmark results would be between plain distcc and distcc+openMosix.)



Conclusion:
Distcc seems to be the best tool for the job, and to save yourself some cross-compiling trouble, the easiest setup by far is to run it on identical platforms.


Screenshots:

http://images.google.com/images?q=openmosix&svnum=10&hl=en&lr=&client=firefox-a&rls=org.mozilla:en-US:official&sa=N&imgsz=xxlarge

782
I was trying to make it look like he's worried, looking up into the sky... tricky with a bird :)

783
Watch out, Codey! They're gonna come and get you!!!


784
Living Room / Re: 10 coolest alarm clocks
« on: March 13, 2006, 02:01 PM »
Or you can build yourself a nice nixie tube clock:

http://www.electrics....co.uk/nixclock.html

785
Living Room / Re: 10 coolest alarm clocks
« on: March 13, 2006, 01:52 PM »
This clock pwns them all.

http://www.cathodecorner.com/sc100.html


786
Try BitComet. ( www.bitcomet.com )
It's written in C++ and runs very smoothly :)

787
Living Room / Re: Cody Clothes
« on: February 13, 2006, 06:41 PM »
All I can say is...

788
Just use an old but good stereo amp. The quality of these old amps far surpasses the new hi-fi crap, imho. :)
I have an old Technics amp (100 W/channel) hooked up to my PC, which works great. I used to have two homemade rack-mounted 500 W mono amps connected to it, but that was kinda overkill. I've tried many surround systems, but I still like old quadraphony or even plain stereo better.
Some of these old sixties/seventies/eighties amps are really cool looking, in a retro-tech style.
Like this one has a tiny radar-like screen:

qx747.jpg

But you get the best audio quality with a tube amp.

3-D-InsB.jpg
Notice the springs at the bottom, which were used to generate echo effects :P
The spring sits in a coil at each end.
Pretty ingenious :)

789
Developer's Corner / Re: Best Programming Music
« on: February 10, 2006, 01:42 AM »
Arcana
Ataraxia
Bach
Beethoven
Bill Evans
Blizzard Entertainment (Diablo soundtracks -> excellent :)
Canned Heat
Carl Orff (Carmina Burana)
Danny Elfman
Darkthrone (\m/ for productivity-mode)
Darkwell
Dead Can Dance
Deep Purple
Die Verbannten Kinder Evas (kinda hard to find)
The Doors
Edvard Grieg (Mickey Mouse, anyone?)
Ella Fitzgerald
Eloy (!!!!)
Estampie (medieval-ish stuff)
Excalibur Soundtrack
Finntroll (for hyper active mode)
Gustav Holst
Janis Joplin
Jean Baptiste Lully
Jimi Hendrix
John Mayall
John Williams (soundtracks by him)
King Diamond (HEADS ON THE WALL!!!)
Lacrimosa (good atmosphere for inspiration to me)
Led Zeppelin
Liquid Tension Experiment (from the Dream Theater guys)
Louis Armstrong
Mozart
Negura Bunget (kinda weird stuff from Romania)
Qntal (electronic music with opera-like female vocals, kinda interesting, very nice, and very good to code on)
Stevie Ray Vaughan
Taake (about the only good norwegian black metal band still left besides darkthrone)
Ten Years After
The Velvet Underground (Yep, Lou Reed, me too...)
Uriah Heep (Gypsy!)
Vangelis
Wanda Landowska (Extreme harpsichord stuff, i love this stuff)
Weltenbrand
Zbigniew Preisner


790
My color scheme in Turbo Pascal was usually green on blue due to excessive ASM usage :D

791
Developer's Corner / Re: indent wars..
« on: February 09, 2006, 09:28 PM »
I've always been coding like so:

#include "someHeader.h"

// Some comment

int someFunc()
{
  blah();
  return 0;
}


792
Why is it that most IDEs have a default white background with black text?

Personally it really hurts my eyes, especially on CRT monitors. The white of a computer monitor is not like white paper; it is emitted light. So for those of us who code over 8 hours a day, which is better for our eyes?
Personally I use a soft green on a black background, because green is one of the colors humans perceive best. Green on black provides high contrast, and using a soft green keeps that contrast from being too harsh. Seems to work best for me. I hear yellow on blue is best for brightly lit rooms / daylight. But black on white is overkill contrast and really hard on the eyes.

Here is an interesting page about this topic:

http://www.writer200...m/colwebcontrast.htm

793
Mini-Reviews by Members / Game Review : Silkroad Online
« on: February 06, 2006, 02:15 AM »
Introduction:
Silkroad Online is a FREE MMORPG (massively multiplayer online role-playing game) developed by a Korean company called JOYMAX. The game is inspired by the oriental silk trade routes and therefore also focuses a lot on trading. Players can set up trading posts, and there is a sophisticated trading system.

Features:
  • An amazing fantasy world with stunning graphics.
  • Quests: if you don't feel like aimlessly bashing monsters, maybe you'll be interested in solving some quests? This game has plenty of them.
  • You can work on improving your skills. These are divided into weapon skills and 'alchemy' skills, which are magic-like abilities.
  • The combat system is real-time; there is no turn-based combat.
  • Interesting interface concept: there is an action window holding several action icons, which can be dragged into containers at the bottom of the screen. Each container corresponds to a number 0-9 that you can use as a shortcut for that action. The containers are then subdivided into groups with the function keys.
  • Animals!! You can ride horses, buy a camel, etc. ...

First impressions:
At first glance I was totally blown away by the quality of the graphics and by how polished the game is for being free. This is truly a high-quality game. I quickly got hooked on the monster-bashing, money-collecting, stuff-buying RPG catch, which is definitely present in this game. After I got bored with that I started exploring the quests a bit, which are also quite interesting. One of the first things I noticed is how MASSIVE this MMORPG really is. There are really a lot of players connected, and it has a well-established community with plenty of guilds.

System requirements:
I mentioned before that this game has stunning graphics, so when I saw the initial screenshots I was a bit worried that my poor graphics card would not be able to handle it, but in fact it did quite well. This is mainly because the game seems highly optimized; it obviously wasn't written by idiots. Objects are only visible from a certain distance to save memory and keep the FPS high, and this distance can be adjusted manually. Objects also don't magically appear/disappear like in some games, nor are you constantly walking around in fog (the solution some other games opt for); instead they fade in and out, which I thought was quite original. As a result, the required CPU isn't too high compared to other games with similar graphics.

Required CPU: Intel Pentium 3 - 800 MHz CPU
Required memory: 256 MB RAM
Required graphics card: GeForce2-class 3D performance or better
Required HD space: 3.0 GB free hard disk space (though the download is 515MB)

Screenshots:
screen1.JPG
screen2.JPG

Links:
