

cloud processing for end users - when? already?


That's what computing originally was. Then we got personal computers.
-Deozaan (October 09, 2011, 08:26 PM)
--- End quote ---

I like that way of mainframing things!  :)

Renegade: Video transcoding was only one example. I can see many other uses: image manipulation, 3D rendering tasks, complex OCR tasks for a lot of documents, and so on. Basically, any task where
 (upload time + download time + cloud processing time) < local processing time
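That inequality is easy to check with back-of-envelope numbers. Here's a quick sketch (all figures are hypothetical, just to illustrate the trade-off):

```python
def cloud_wins(size_mb, up_mbps, down_mbps, cloud_s, local_s):
    """True if offloading to the cloud beats processing locally.

    Transfer time = size (MB) * 8 bits / link speed (Mbit/s).
    All numbers passed in are illustrative, not measurements.
    """
    upload_s = size_mb * 8 / up_mbps
    download_s = size_mb * 8 / down_mbps
    return upload_s + download_s + cloud_s < local_s

# A 700 MB video on a symmetric 100 Mbit link:
# 5 minutes in the cloud vs. 40 minutes locally.
print(cloud_wins(700, 100, 100, 5 * 60, 40 * 60))  # True

# Same job on a 1 Mbit link: transfer time alone kills it.
print(cloud_wins(700, 1, 1, 5 * 60, 40 * 60))  # False
```

The interesting thing is how fast the answer flips with link speed, which is why the 100 Mbit figure matters so much for this idea.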

Bandwidth might be an issue but shouldn't be exaggerated, even for video transcoding. A fair number of people have had 100 Mbit connections at home for some time now. Far from everyone, of course, but enough for this kind of service to take off, I'd think.

Cloud transcoding might pose some legal (copyright) problems. But I'm not too sure about that. Couldn't a generic cloud processing service claim to be a mere infrastructure? Recording digital TV broadcast with a TV card is legal where I live and so is transcoding those recordings on my PC for personal use. And a system could perhaps be built so that the local computer only uploads obfuscated calculation tasks to the cloud. Then the cloud processing service can sincerely say that they can't know or control what the processing is for.

Anyway, if legal risks explain why no company has tried it, then a P2P version of the same idea could still be possible.

We already have P2P file sharing and P2P proxies (Tor). I've also seen attempts at P2P cloud storage (though I can't recall a specific example). So why not P2P processing?

We actually already have large-scale distributed processing at work in the @home volunteer computing projects. But there users only donate their CPU cycles to science.

We also have many examples of limited cloud processing for end users already: webapps that let us upload a file for a malware check, sites that convert PDFs to other formats, and so on. But those are limited to very specific tasks that the user can't modify much.

Ok, now for the third post I actually did some googling. I found this: "Mitch Garnaat boosts his massively scalable Monster Muck video conversion service" - about using Amazon AWS to transcode video already back in 2008 - which mentions, among other things, this service.

All of the above look to be aimed at corporations rather than end users.

And a system could perhaps be built so that the local computer only uploads obfuscated calculation tasks to the cloud. Then the cloud processing service can sincerely say that they can't know or control what the processing is for.
-Nod5 (October 10, 2011, 02:18 PM)
--- End quote ---

Don't know where you are, but where I sit the law is pretty definite that you won't automatically be held (criminally) liable for something you were innocently unaware of. (Intent is a major factor in criminal proceedings, after all.) But that's not an absolute rule. You're also not allowed to create a deliberate blind spot and then use it as your defense against being charged as an accessory to a criminal act.

The simple fact that you were deliberately obfuscating data streams would be enough to convince the average judge that you were attempting to evade responsibility and culpability through a technical dodge. That alone could open you up to prosecution for a variety of "conspiracy to commit" charges - even in the absence of an actual violation.

In short: No dice!  :)

I'm not so sure about those legal worries, given that we live in a world where ISPs, commercial proxies, AWS, and P2P projects like Tor and a lot of file-sharing software persist even though some of their users do unlawful things through those services. I'm not sure why a "processing service provider" in the cloud should be on shakier ground than an ISP is.

Anyway, let's put legalities aside. I was more intrigued by the basic tech idea and why we don't have it built into applications and operating systems already (it would make sense in Linux desktop OSes I'd think, sharing CPU cycles with the community).

To focus on a perhaps better example, imagine a thousand people, all with high-speed bandwidth, who now and then dabble in CPU-intensive 3D rendering. When they render, their CPU is maxed out yet the job still takes many hours. Then their CPU and bandwidth idle until, weeks later, they need the CPU for rendering again. Here it would make sense for them to band together in a P2P processing system: each user donates spare CPU cycles and in return can draw on a lot of CPU power for short bursts of time.
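The rendering-pool scenario above boils down to a credit ledger: you earn by donating idle cycles and spend during your bursts. A toy sketch of that bookkeeping (names and numbers are made up, and a real system would obviously need networking, verification, and fraud protection):

```python
class CyclePool:
    """Toy credit ledger for a P2P processing pool (illustrative only).

    A peer earns one credit per CPU-hour donated while idle, and
    spends credits to borrow CPU-hours from the pool for a burst.
    """

    def __init__(self):
        self.credits = {}

    def donate(self, peer, cpu_hours):
        """Record idle CPU-hours a peer has contributed."""
        self.credits[peer] = self.credits.get(peer, 0) + cpu_hours

    def request(self, peer, cpu_hours):
        """Grant a burst only if the peer has donated enough before."""
        if self.credits.get(peer, 0) >= cpu_hours:
            self.credits[peer] -= cpu_hours
            return True
        return False

pool = CyclePool()
pool.donate("alice", 50)           # weeks of idle-time donation
print(pool.request("alice", 40))   # True: burst granted
print(pool.request("alice", 40))   # False: only 10 credits left
```

The design point is the asymmetry: donations trickle in over weeks, but withdrawals are large and rare, so the pool stays solvent as long as most peers are idle most of the time.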

What you're describing sounds very much like something David Gelernter discussed in his book Mirror Worlds.

Gelernter proposed a mechanism whereby all the computing power on a given network could be harnessed as a sort of distributed supercomputer. This anticipated the now common notion of clustering.


This was taken beyond a theoretical proposal when Gelernter developed a "coordination" language called Linda to accomplish exactly that. There's a NYT article entitled David Gelernter's Romance With Linda that discusses what Linda brings to the table:

<<Edit: items quoted from NYT removed by 40hz. Use above link to read article.>> :)

What made Linda different from basic clustering (and far more interesting) was that each member of the Linda assemblage could 'negotiate' processing availability or demand with the other devices on the network, rather than having it statically assigned by a human scheduler. As Gelernter characterized it, you just toss your problem to Linda, and Linda figures out how best to run it based on what else she currently has on her plate, resource-wise.
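For anyone curious what that looks like in practice: Linda's core idea is a shared "tuple space" that workers write tasks into and pull tasks out of by pattern matching. Here's a minimal single-process sketch of that pattern (real Linda coordinates many machines; this only shows the out/in/rd idea, with None as a wildcard):

```python
class TupleSpace:
    """Minimal single-process sketch of a Linda-style tuple space."""

    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Deposit a tuple into the space."""
        self.tuples.append(tup)

    def _match(self, pattern, tup):
        # None in the pattern matches any value in that position.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def in_(self, pattern):
        """Withdraw the first matching tuple (None if no match)."""
        for tup in self.tuples:
            if self._match(pattern, tup):
                self.tuples.remove(tup)
                return tup
        return None

    def rd(self, pattern):
        """Read a matching tuple without removing it (None if no match)."""
        for tup in self.tuples:
            if self._match(pattern, tup):
                return tup
        return None

ts = TupleSpace()
ts.out(("task", "render-frame", 42))
print(ts.rd(("task", None, None)))   # ('task', 'render-frame', 42)
print(ts.in_(("task", None, None)))  # withdraws the same tuple
print(ts.rd(("task", None, None)))   # None: nothing left to claim
```

Because any idle worker can withdraw the next matching task, the load balancing falls out of the model itself - no human scheduler assigns work, which is exactly the "negotiation" quality described above.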

Does this sound something like what you're talking about?  :)

