Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - rkarman

2
Suffice it to say, I have personal evidence that you're wrong.  It really depends on the judge, the lawyers, and the day of the week.

Even better, if you didn't have that evidence, the other party would have that evidence instead... No one goes to court thinking they're going to lose, yet half of the people do. Often it's a case of not knowing the rules of the game or understanding the legal aspects, and ... like you said ... sometimes it's just Monday morning.

Anyway, it's all beside the point: Microsoft made this promise in very precise legal wording. From that wording you can conclude that they painted themselves into a corner with their pants down, if their intention had ever been to sue anyone for patent infringement. Their chances of winning such a case went down to near zero, unless you get a drunk judge on Monday morning.

3
Not true, judges also need to follow the law, and in contract law the main rule is the "Parol Evidence Rule".

It basically boils down to this: If the written words of a contract are enough to come to a conclusion, but one party gives testimony in court that their intention was different, then the judge is not allowed to use the testimony and has to follow the written words in a ruling. Even if the judge might feel that both parties understood the intention, he/she cannot follow the verbal testimony.

Example: say Microsoft gives testimony in court that they never meant the promise to be an enforceable contract. The judge is still not allowed to rule that way, because the legal definition of the word "contract" is
Bilateral and Unilateral Contracts The exchange of mutual, reciprocal promises between entities that entails the performance of an act, or forbearance from the performance of an act, with respect to each party, is a Bilateral Contract. A bilateral contract is sometimes called a two-sided contract because of the two promises that constitute it. The promise that one party makes constitutes sufficient consideration for the promise made by the other.
note the specific mention of the word "promise" here.
So due to the parol evidence rule, a judge would not be allowed to rule that the promise is not a contract in this case.

Other example: Microsoft testifies that its intention was for the contract to be terminated retroactively. However, the legal definition of the word "termination" is
The termination or cancellation of a contract signifies the process whereby an end is put to whatever remains to be performed thereunder. It differs from Rescission, which refers to the restoration of the parties to the positions they occupied prior to the contract.
So due to the parol evidence rule the judge is not allowed to grant a retroactive termination, because the correct legal wording for that would be "rescission" or "retroactive termination" and neither was used in Microsoft's promise.

For whoever wonders what the exact definition of the parol evidence rule is: http://legal-diction.../parol+evidence+rule


It seems to me that Microsoft took great care in the wording of their promise to make sure they can enforce it in court. I have this feeling because of the way the promise is set up: it's a promise between Microsoft and you personally, and it holds as long as you personally are not involved in a patent case against them. And if you are, the promise will terminate. All of this points to the promise no longer being valid as soon as you are standing in front of a judge, by which time the judge will ask Microsoft if they have a counterclaim. I speculate that they wanted to give the open source community a feeling of security, which is why they didn't make the termination retroactive. With the amount of patents Microsoft holds, that's not really needed anyway.

Microsoft could not have pulled their pants down much lower, in my opinion, but if you'd still like to think that Microsoft has some evil intentions with this patent grant, then, like with any contract, you're of course allowed not to accept it. If you're a programmer, however, you're most likely still infringing one of the many thousands of patents and intellectual property rights they own.

4
^ It's a little more tricky than it looks.

Basically, under this wording, if you are using .NET and you ever assert an IP claim against Microsoft for any reason, you automatically become a provable infringer on Microsoft's IP because their "personal promise" is automatically withdrawn. So in short: use .NET, try to sue us, and you're now an infringer.

So while your infringement claim remains to be proven, Microsoft’s infringement claim against you is already established.


Not really..

First of all, "promise" is legal wording and is legally binding, and this particular promise is not retractable for any reason other than you participating in a patent case against Microsoft. The only way to stop this promise from being valid for future .NET versions is for Microsoft not to make those future versions to begin with.

Second, in contract law the term "termination" means that both parties are released from their obligations to effect and to receive future performances. Any claim for compensation of past performances needs to be made before the contract is terminated. In this case the termination is "automatic", making any arrangement before it takes effect impossible.

5
Still, uber-complex legislation is bad, mmmkay?
I agree

The point of the "Lessons from Grand Central Terminal" article was that the complexity introduced in new laws is bad because it only creates more points of failure. Personally I think the bad part of complexity in legislation is that it hides the fundamental flaws in the laws themselves (especially from the usually not-so-bright politicians).

Take the BP example of the article: "Even the government’s response to the tragically ongoing BP oil spill has been one of triangulation and determined-complexity. Get some supertankers to siphon off the leaking oil? Nope. Help Louisiana Gov. Jindal to build some temporary barrier islands along parts of the coastline?  No sir.  Keep a boot on the throat of BP — hey, that’s a killer sound bite!  Let’s go with that!"

The solutions proposed by the article are overly simplistic. They will work and do well for this specific case, but they will not prevent the next oil spill… The harm here is that the solution is still flawed, even though it looks solid.

Corporations are invented with the sole purpose of parasitizing off the common wealth. If you design the law this way, you should not just put out the accidental fire it caused and then think you solved the structural problem. Corporations are by law not liable for the costs they impose on future generations and the public as a side effect of making a quick buck. No oil drilling company has to provide replacement energy reserves for future generations, and no oil company pays for cleaning up future oil spills. This borrowing from the future is true for any kind of corporation.

BP was allowed to parasitize society. They are even required to do so by law!

The main problem with the article, again, is that it doesn't analyze the validity of its own example and misuses that example in such a way that it makes a killer sound bite...



Anyway I was more interested in the software design side of this discussion, since I do believe complexity is a problem when it grows exponentially (which it does almost automatically if you are not very strict in maintaining architecture).

Take building a house as an analogy.
Doghouse (or small software): You take a few timbers and nails; throw them together and voila a doghouse.

Let’s size it up 10 times.

Normal House (medium-sized software): You take a few bricks and mortar; throw them together and voila, a normal house. It might just work, although having it architected is usually a better solution.

Let’s size it up 10 times.

Cathedral (highly complex software): You take a few bricks and mortar; throw them together and voila a … uhmm ... pyramid?

Without design a cathedral cannot support its size. The amount of materials is too big and the whole thing will collapse under its own weight. The only way to get a lean structure with so much empty space in it is via architecture and design. Complexity is not the problem, complexity without architecture and design is.

6
Think about this:  A system with 50 required parts, each one having 95% reliability, has an overall reliability of 0.95^50, or barely 8%!

If I think about this, I come to the conclusion that it's not true... The internet is built of millions of components (if not more) that all have 99.98% reliability. According to this theory the overall reliability would be something like: 0.9998^1000000 ≈ 1.36×10^-87, i.e. a decimal point followed by 86 zeroes.

Even if we considered the internet to be built of only 10,000 devices, each with a reliability of 99.95%, we would still end up with an overall reliability of less than 1%.
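To check the arithmetic, here is a minimal C# sketch of the series-reliability formula used above (overall reliability = per-component reliability raised to the number of components); the class name and the exact numbers are just for illustration:

using System;

class ReliabilityDemo
{
    static void Main()
    {
        // overall reliability of a chain of required components = r^n
        Console.WriteLine(Math.Pow(0.95, 50));         // ~0.077, the "barely 8%" example
        Console.WriteLine(Math.Pow(0.9998, 1000000));  // ~1.36E-87, the internet example
        Console.WriteLine(Math.Pow(0.9995, 10000));    // ~0.0067, less than 1%
    }
}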

I think the formula is somewhat flawed, or at least clearly can't "simply" be used as generically as it is used in this text. So the simplicity of the original article proves that too much simplicity also introduces errors.

The real lesson is ... Simple == Unreliable, Complex == Unreliable. Reliability is not an accidental by-product of either simplicity or complexity; it needs to be designed and implemented. The author of "Lessons from Grand Central Terminal" clearly didn't have reliability in mind when designing & implementing his article.




7
General Software Discussion / Re: Scripting vs. Programming
« on: January 20, 2007, 02:31 PM »
Thanks for the explanations. But is there a point at which a script becomes a program? f0dder makes a good point with javascript. The line is gray from my standpoint.

this is not a grey line at all: when a developer says it is a script, then it is a script; when the developer says it's a program, then it's a program. it is whatever its creator says it is, simple as that 8)

8
General Software Discussion / Re: Scripting vs. Programming
« on: January 20, 2007, 02:16 PM »
well, i think we're trying to find the theory of everything here, the one formula that describes the whole universe (of programming). practically speaking, the words were made up years ago and do not have the same meaning anymore. take low and high level language for instance: the dotnet (virtual) machine has msil as its low level language, yet this language is typesafe and object oriented, and provides full support for scope visibility, garbage collection, etc... is that low level???

now scripting vs programming. it used to be that macros and scripts were automation "scripts" that would execute a chain of tasks in a system, possibly with some limited programming features thrown in. today however, with scripting languages like javascript, you have an almost full blown OO language which delivers features that .net doesn't even have (superclassing). like dotnet, it is compiled to intermediate code (jit compiled) and then executed at a speed comparable to a java application. yes, javascript even has reflection... now i ask you, is that a scripting language???

now, having explained that no simple definition is valid since it can never match both the past and the current situation, the closest definition of a scripting language today is i.m.h.o.

"a language that is not compile time type checked and a language that runs directly from it's source code. (compiling + executing in 1 go or interpreting)"

this is however a very poor definition, since the original basic languages would be scripting languages by this definition; it is the closest (simple) match i can think of though.

9
General Software Discussion / Re: My favorite software! What's yours?
« on: October 10, 2006, 08:13 PM »
Favorite Software:

1 windows professional (2000 or higher)
2 visual studio 2005 (also available as free version) -> http://msdn.microsof...com/vstudio/express/
3 reflector (free dotnet decompiler) -> http://www.aisto.com/roeder/dotnet/
4 ollydbg (free win32 x86 debugger/disassembler) -> http://www.ollydbg.de/
5 sql server 2005 (also available as free version) -> http://msdn.microsof...com/vstudio/express/

as you can see i am fully development & reverse engineering minded ;)

p.s. visual studio would be at nr1 if i didn't need windows to install it, lol.

10
But anything that outputs to vbscript can hardly be that complicated ;)

LOLOL

11
General Software Discussion / Re: Data recovery software suggestions?
« on: September 15, 2006, 03:25 PM »
RECOVER MY FILESSSSSSSSS


it saved me just 2 weeks ago when Norton messed up my partition tables and i had to make new partitions on my disk, format and reinstall windows. this program actually got 95% of all my stuff back after all that!!!


so i say: recover my files +10 (or was i only allowed to do a +1?)

12
yeah, i meant a compiler like the one Joel Spolsky wrote is not a big deal at all, like compiling some language to another (non-machine) language.

i guess writing a genuine compiler could be done in a few months too, but you need a lot more knowhow to pull that off. if you first have to go figure out the PE format or the machine operations of the target platform, then you are looking at closer to a year's worth of work, lol.

13
Writing a compiler is not such a big deal, especially if what you get back is that you only have to maintain 1 codebase instead of 2 or more.

I also have to agree that typeless/dynamically typed languages (like ruby) are really bad news for professional enterprise systems (not looking at speed at all here): dynamically typed languages delay the moment you find bugs, and this delay can be deadly in enterprise software. i have to admit i frowned when i read he was actually saying PHP was a good choice.... or did they strong-type it by now?

To me everything this guy says makes perfect sense, and i suspect everyone going nuts at him are all hobby programmers. which hobby programmer cares if their code fails (on some type error) after it has been tested and has been running for a month? not many, i guess. personally i always try to prove my code is correct instead of proving that my code is not correct (that last one is called debugging), and dynamic typing can only prove your code is not correct in some obscure situation, when it happens!

14
i'm sure many of you think this is an obvious story, but i'm going to tell it anyway for all who didn't know yet :-p


there are 2 types of variables in programming, one is called a value type, the other is called a reference type. very nice that you have them both, but what does it actually mean? well, a value type variable is quite easy to explain: it's just a variable that directly holds a value. then the question comes, what is a reference type variable? well, a reference type variable still has a value (of course), but the variable itself only holds a reference to where that value lives.

now the reason why we need 2 types of variables becomes clearer when you understand what happens when a variable is passed to a method or function.

lets look at the normal situation first:
when you pass a value type variable to a method, the actual value of the variable is copied into the function. so the function gets a new copy of the value to work with. this means that if you change the value of the variable inside your method, the original value of the variable outside your method is not changed at all.
when you pass a reference type variable to a method, the reference to the variable is copied into the function. so the function gets a new copy of the reference, but not of the value. this means that if you change the value of the variable inside your method, the original value of the variable outside your method is changed.

now in some languages you can also pass variables "by value" or "by reference". usually this means that reference type variables are always passed by reference, even if you declare the method to pass "by value".
for value types it means that a reference can be created on the fly when you pass the variable "by reference".

now there is one more tricky thing about all this reference and value business: if you want to know whether changing your variable inside a method will change it outside the method too, then you also need to know about something we call "immutable variables" (yes, i swear we are done after this one). basically, when a variable is immutable it means that it cannot change. it's not the same as a constant though, because you can make a new copy of this type of variable and change the reference to point to this new copy. this way it looks like your variable changed, and it also looks like other references (that did not get updated) still have the old variable value (in fact they have the complete old, unchanged variable).

phewww, i bet some of you need to read that 2 times :-P


so now some ways these are all used
value type: usually holds small data (like numbers) and cannot be NULL; structs/structures
reference type: usually holds lots of data (like strings); classes
immutable: usually strings and similar reference type variables

not in all languages is this the same though, just in most. C for instance has char arrays (char*), which are basically a string-like type, but they are not immutable (this is one of the reasons why C is called a low level language by some)
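here's a lil c# sketch of all of the above; the type and method names are just made up for illustration:

using System;

class PassingDemo
{
    struct Point { public int X; }   // value type
    class Box { public int X; }      // reference type

    static void ChangeValue(Point p) { p.X = 99; }   // changes a copy only
    static void ChangeObject(Box b) { b.X = 99; }    // changes the object behind the reference

    static void Main()
    {
        Point point = new Point();
        Box box = new Box();

        ChangeValue(point);
        ChangeObject(box);

        Console.WriteLine(point.X);  // 0  - the original value was not touched
        Console.WriteLine(box.X);    // 99 - the object the reference points to was changed

        // strings are reference types but immutable:
        string s = "old";
        string t = s;
        s = s + " and new";          // creates a brand new string, t still refers to "old"
        Console.WriteLine(t);        // old
    }
}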

15
i just realize i should give an example of why it is bad

consider the following code (in c#):

string getnode(string xpath)
{
        try
        {
                return myconfigfile.SelectSingleNode(xpath).Value.ToString();
        }
        catch
        {
                return "";
        }
}


at first it looks like a good routine (a lil lazily written maybe), because whatever happens you always get an empty string instead of a null reference exception, even if the node did not exist.

but what happens if you feed it an invalid xpath string? the routine also returns an empty string! this can cause a lot of things to go wrong in your code, not at the point where it fails but later on, where the results of the failed code are used. debugging those errors is a crime.

now look at this code:

string getnode(string xpath)
{
        Node mynode = myconfigfile.SelectSingleNode(xpath);
        if (mynode == null)
                return "";
        else
                return mynode.Value.ToString();
}

this also returns an empty string if the node doesn't exist, but if you feed it a wrong xpath string this code will crash. so the first time a developer runs code that calls this function, he will immediately get an error about invalid xpath syntax.

the try/catch unnecessarily catches all errors, anticipated or not. by instead checking only for the specific anticipated situation (a missing node), we are not caught off guard when we write some flawed xpath string.

of course there are situations where this example would not apply at all, but i think it is a good thing not to write code the first (try/catch) way by default


hope this clarifies my view a lil more

16
yeah i agree. i meant you should not handle that on each function call though, but on the highest level possible in your code. the idea is not to have no error handling at all, but just one error handler at the topmost level.

this post was more to warn people against writing try/catch around everything they do.
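a minimal sketch of what i mean by one handler at the topmost level (the method names are just illustrative):

using System;

class Program
{
    static void Main()
    {
        try
        {
            RunApplication();   // all the real work happens in here, without try/catch everywhere
        }
        catch (Exception ex)
        {
            // the single topmost handler: log/show the error instead of letting the app die silently
            Console.Error.WriteLine("Unexpected error: " + ex);
        }
    }

    static void RunApplication()
    {
        // ... the actual program ...
    }
}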

17
Error handling assumes there are errors in your code; this is of course a bad assumption, and all time spent on error handling should be spent on error checking instead. well, this is my view at least.

why do i make such a wild claim that error handling is not a good thing?
well, in my opinion there are 3 types of errors:
1 unforeseeable and unrecoverable errors; in this group you would have, for instance, the "out of memory" exceptions. well, what can you do if this happens? not much actually, most likely you don't have the memory to do any recovery to begin with.

2 errors that function as output of a function; in this group you would have a "read only" exception on a file operation. the exception is thrown as kind of a return value/status. this kind of error should be handled, but can be argued to be more like a status message or return value, and would always be mentioned in documentation.

3 foreseeable errors; in this group you would have errors like "divide by zero" or "null reference" exceptions. basically, instead of trying to catch them later on, it's better to check for these up front. if you forget to write the check and handle things properly before doing a divide, it is better not to try to recover later on. trying to recover would only make it easier for the bug to slip through your tests and checks, and for the automatic recovery to make your program behave oddly.
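a tiny sketch of what i mean by checking up front for category 3 (the method is made up, just to illustrate):

// the foreseeable case is checked explicitly; anything unforeseen still crashes loudly
// instead of being swallowed by a catch-all
static int CallsPerSecond(int calls, int seconds)
{
    if (seconds == 0)
        return 0;            // the anticipated situation, decided right where it happens
    return calls / seconds;
}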

of course you should handle errors somewhere (on the highest level, for instance) to output them to a log or display them on the screen instead of letting the code really crash. the point is though: don't write too much error handling! (like in every function or method) it's one of the biggest mistakes you can make when trying to get bug-free code (i know it sounds odd, but try it sometime)

18
well, i don't trust time on a computer at all, but then again i don't trust my government either ;)


funny thing is, we get call records from customers that ended before they began; if you receive such data from a satellite system worth a few hundred million dollars, you know enough!

19
While developing some code at work i decided to write a class called Unit that would handle all translations between units of measurement: from seconds to minutes, meters to kilometers, bytes to bits, etc. a noble task, to ease the burden of the other developers in the team and to reduce errors in their code. so i started and made my class, and after finishing it i debugged it of course :)

now, end of story right ...    nope!!!


after writing a pricing engine for a new satellite system i found out it was impossible to use my class and get correct results! i kept debugging and debugging, and after having debugged the unit class several times i decided to take a really close look at it and super-debug it once more.

so now we have the end of the story right?   ... nope...

yet again the super-debugged unit class was giving trouble, and it took me a whole night of stepping and debugging to realize something trivial. units work really oddly! there are units that translate as follows:

1 minute = 60 seconds (this makes the factor 60 and we need to multiply to get from minutes to seconds)

then we also have:
60 calls per minute = 1 call per second (this makes again the factor 60, but instead of multiplying to get from minutes to seconds we need to divide this time!)

it turned out that all my troubles came from forgetting that a unit-per-unit (a rate) translates differently than a unit by itself!


the moral: you can make a reusable class for "Units of Measurement" and translate them into each other, but don't forget that there are 2 kinds of translations!
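a minimal sketch of the two kinds of translations (these helper methods are just made up for illustration, not my real Unit class):

// converting a plain unit multiplies by the factor...
static double MinutesToSeconds(double minutes)
{
    return minutes * 60.0;          // 1 minute = 60 seconds
}

// ...but converting a rate (unit per unit) divides by that same factor
static double PerMinuteToPerSecond(double callsPerMinute)
{
    return callsPerMinute / 60.0;   // 60 calls per minute = 1 call per second
}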

20
time is a weird thing on the computer. in real life time is already weird, since there is a future and a past, but in reality there is only a now.

well, on the computer it gets a lot weirder. on a computer, time is not monotonic: especially with networked computers and time synchronization, the clock can jump from one moment to an earlier moment. so a computer can travel back in time! then later on it can travel forward in time again. especially for software that records history, or the moment at which things were recorded, this can have huge implications!

for instance: imagine a log mechanism that records when an action is done, so the user can look in the log to read back whether they did something or not, or whether the computer recorded the action and performed it correctly. well, if you display the log sorted on the date/time you recorded the action at, then it might be that (say, if a clock chip failed) the last entries are sorted all the way to the beginning of the log. now the user checks whether the action they did was performed correctly and they see nothing was done at all (because it is not at the end of the log but at the beginning).

well, this seems like a small problem, but i can tell you that at my company (mobile satellite phone business) we could cause a lot of trouble this way. a user could accidentally add too much credit to their satellite phone, with all the trouble that comes with that: angry customers, phones going over their usage limits, billing disputes, etc.

the moral of the story: never forget that time is not monotonic on a computer, and that dates and times are never to be trusted fully!
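a lil c# sketch of the difference between the wall clock and an elapsed-time measurement (Stopwatch measures a duration without trusting the wall clock, so it's the safer choice for durations):

using System;
using System.Diagnostics;

class ClockDemo
{
    static void Main()
    {
        // wall clock time: can jump backwards when the clock gets (re)synchronized
        DateTime before = DateTime.Now;
        // ... do some work ...
        DateTime after = DateTime.Now;
        Console.WriteLine(after - before);   // can even come out negative if the clock was set back!

        // elapsed time: Stopwatch only measures how long something took
        Stopwatch sw = Stopwatch.StartNew();
        // ... do some work ...
        Console.WriteLine(sw.Elapsed);
    }
}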

21
aside from using fixed point math instead of floating point math, also watch out to design your database columns with the "decimal" type (fixed point) and not the "float" type ;)
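a quick c# illustration of why (the amounts are just examples):

using System;

class MoneyDemo
{
    static void Main()
    {
        double d = 0.1 + 0.2;        // binary floating point can't represent these exactly
        decimal m = 0.1m + 0.2m;     // the decimal type keeps these decimal fractions exact

        Console.WriteLine(d == 0.3);   // False: d is actually 0.30000000000000004
        Console.WriteLine(m == 0.3m);  // True
    }
}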

22
Developer's Corner / Re: Make (and play) your own Xbox game
« on: August 14, 2006, 07:39 AM »
I can't help but wonder how flexible this will be and how fast (or slow) it will be.
I mentioned slow because .NET was mentioned in the article,... .Net...games...?...eww :p
And after having seen the system requirements for Vista, I'm afraid to find out the requirements for this one.

i think dotnet is especially good for building games; code executes at 50% to 90% of the speed of c++ code, and with cpu's getting faster each month, i think developing in c# can only bring more complex games to the market faster. i read that managed directx runs at about 90% of the speed of unmanaged directx, so there is not much to worry about there either.

of course if you look at the average dotnet code (written without understanding) it is many times slower than optimized c++ code, but just doing some simple things, like checking what you're about to do instead of writing that try/catch, can improve speed by a factor of over 1000 (yes, i tested this to convince my coworkers to program differently)
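a lil sketch of the kind of check-instead-of-catch i mean (int.Parse vs int.TryParse is just the handiest example):

// slow when inputs are often bad: every failure throws and catches an exception
static int ParseWithCatch(string s)
{
    try { return int.Parse(s); }
    catch (FormatException) { return 0; }
}

// fast: the check happens while parsing, no exception machinery involved
static int ParseWithCheck(string s)
{
    int value;
    return int.TryParse(s, out value) ? value : 0;
}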

one thing is for sure, to port my pacman to xbox i would choose C# over C++ any day of the week ;)

then a lil about vista: in vista the UI is made with game technology, which adds a lot of extra to the specs. yet if you compare what you need for vista to the specs of any modern game console, then vista isn't doing badly at all.

23
General Software Discussion / Re: Scripting vs. Programming
« on: August 13, 2006, 07:50 PM »
If it contains variables its programming

so we figured it all out, all scripting is programming. html & xml are not

euhmm... and now ...
what is xslt & sql?


(just to finish the full classification  ;))

24
Living Room / Re: Steve ballmer sells Windows 1.0
« on: August 12, 2006, 07:07 PM »
he kept his job by doing the same thing he did when he started...

need proof?

http://video.google....643&q=developers
http://video.google....403&q=developers


euhm, i have 4 words for you...

DON'T ... USE ... THAT ... STUFF ...
lol

25
Developer's Corner / Re: Flow oriented programming
« on: August 12, 2006, 06:06 PM »
i don't want to be annoying or anything, but in my opinion there is no holy grail of programming. if you make software it is always more complex than you hoped/thought it would be. if you draw a picture with an arrow and some blocks you always lose detail (did the arrow mean an invocation? was it dynamic? or was it static?). if you try to capture it all (like uml does), then the drawing becomes as complex as the code, and the idea of making it understandable with a picture is lost somewhere in the hundreds of printouts of beautiful uml images.

sure, it's not all bad; oop offers some extra ways to structure your code, for instance. but don't be fooled, it will never bring you the gold that was promised to be at the end of the rainbow.

i think that it doesn't matter as much which design/way you choose to organize your code as it matters that you stick with the choice you made and keep everything consistent.

having said that, i think there are a few things that are great: one of them is your own mind (of course, lol) and the other would be refactoring tools. they can really help you keep your code from falling into decay, and add structure to it later on.


so the moral is: there is no holy grail, just some handy tools maybe ;) i hope you can prove me wrong though!
