Want to build a new OS today? It's going to cost you $1.4bn (or more)

40hz:
I want a copy of that poster! 
-40hz (June 09, 2010, 07:45 AM)
--- End quote ---

-then read the next lines from your link...

If the DIY approach is too daunting, Linuxcare will be printing a run of posters at 120cmx168cm (about 4' x 5'8") for sale. If you are interested, send us mail.

A preview version was also sent to Everything Linux in Australia, so they may have them available for sale soon, too.
--- End quote ---
-Curt (June 09, 2010, 08:25 AM)
--- End quote ---

I did.

I was sharing. :P

zridling:
I also think it would depend on what you would want the OS to do. But it does seem that so much of the innovation in the last ten years -- mobile, embedded, server, appliance -- starts with the kernel and peels off whatever is not needed. No need to reinvent the wheel, so to speak. Especially if it scales.

40hz:
I also think it would depend on what you would want the OS to do. But it does seem that so much of the innovation in the last ten years -- mobile, embedded, server, appliance -- starts with the kernel and peels off whatever is not needed. No need to reinvent the wheel, so to speak. Especially if it scales.
-zridling (June 09, 2010, 08:45 PM)
--- End quote ---

Thanks for pointing that out.

I try to be open-minded about a lot of the criticism directed at Linux. But it's getting increasingly hard to remain civil while hearing people bash a working system without providing anything concrete to show how it could be done better. All I usually hear is how it "sucks" based on some pet theory, or on a design paradigm that's found in an academic paper - and never anywhere else.

Real-world software design and implementation is a very different beast from some pie-in-the-sky whitepaper or PhD thesis.

Look no further than Codd's relational database model for one example. Most (and possibly every) relational database that has ever been coded deviates to a greater or lesser degree from Codd's rules. But while these products may not have the mathematical "purity" of relational theory, they do work. And even Codd reluctantly admitted, in his more candid moments, that the additional complexity of implementing a fully "correct" relational database product would likely outweigh any benefits to be gained by doing so.
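
To make that gap concrete, here's a minimal sketch using Python's standard-library sqlite3 module (the table and values are made up for illustration). In Codd's model a relation is a *set* of tuples, so duplicates can't exist - yet real SQL engines cheerfully accept them unless you add a constraint yourself:

--- Code: ---
# Sketch: SQL's "bag" semantics vs. Codd's set-based relational model.
# Uses only the Python standard library; table and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# In pure relational theory a relation is a set, so a duplicate tuple
# simply cannot exist. SQL accepts it without complaint.
conn.execute("INSERT INTO users VALUES ('alice')")
conn.execute("INSERT INTO users VALUES ('alice')")

print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # -> 2
--- End code ---

That pragmatic deviation is exactly the sort of thing working products ship with, purity be damned.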

The general rule I was taught about software development budgets ran something like this:

For any non-trivial software development effort:

* The first 90% of the project will consume 90% of the available time and budget.
* The final 10% of the project will consume an additional 90% of the original allocated time and budget.
And while it's probably not a good idea to generalize, I have rarely seen major software projects where that wasn't the case. Especially if they were system-level development projects.
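
The arithmetic behind that rule is easy to run (a back-of-the-envelope sketch; the $1M budget below is a hypothetical figure, not from any real project):

--- Code: ---
# The "90/90 rule" as plain arithmetic, using a hypothetical $1M budget.
budget = 1_000_000        # original allocation (example value)
first_90 = 0.90 * budget  # the first 90% of the work: 90% of the budget
final_10 = 0.90 * budget  # the last 10%: ANOTHER 90% of the original budget

total = first_90 + final_10
print(total, total / budget)  # -> 1800000.0 1.8, i.e. 180% of plan
--- End code ---

In other words, the rule predicts a project lands at roughly 180% of its original time and budget before it ships.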

So what I would politely like to ask all the Linux trolls out there is this:

Next time you feel the need to bash the work of others, could you at least have the decency to show us some code you've written (and are willing to contribute) that does anything better than what's currently being used?

Because until you do, it's a little hard for those of us in the Linux world to see such comments as being anything other than "brag and bounce."

And we can scoop that substance off a stable floor anytime we have a craving.

Submitting a correction or a replacement piece of code garners more credibility, and yields more benefit to everyone, than sniping ever will.

Remember: De nihilo nihil... as one wise Roman so aptly observed. 8)

Deozaan:
The sad thing is that there are only two OSes in the world: UNIX and Windows.

We could certainly use some fresh innovation in the OS department.

40hz:
^Just out of curiosity, what exactly are today's operating systems lacking that major innovation is called for? I'd agree there's always room for improvements in clarity, efficiency, and speed. But on a fundamental level, what needs to be changed? And as long as we're sticking with a von Neumann architecture, what really can be changed?

If we had true parallel processing it might be a different story. But building transputers for desktop and general servers doesn't look to be in the cards any time soon despite the fact we have known how to build them for something like 40 years. Danny Hillis's brilliant Connection Machine is the only parallel system I'm aware of that actually got some traction. But even with all the excitement and press it got, it still only saw limited deployment on some extremely specialized projects.

Which raises the question: how often is parallelism really called for?

Right now it looks like the old-fashioned "VN" architecture, tricked out with some fancy hypervisor to provide virtual machine environments and limited (as in semi-faked) parallelism, is where things are headed. And that's mainly because it's good enough for what we need it for.

And as long as the chips keep getting faster (and less expensive) - does it really matter? Hardware development costs are cheap compared to software development expenses. Software costs don't benefit from economies of scale the way hardware does. Nor does prior product experience help much in holding costs down. Software 'reuse' continues to be an elusive goal despite two decades of OOP practices. Most system software - and virtually all "breakthrough" applications - are written from scratch because it's still more efficient to do it that way.

And most times it's more prudent to run your "old but working software" on a faster machine than it is to try to improve the code beyond a certain point.  

Your thoughts?  :)  
