Author Topic: Blog Piece: Trend Watch: P2P Traffic Much Bigger Than Web Traffic  (Read 8971 times)

mouser

Interesting blog post on readwriteweb.com about a presentation at the Web 2.0 Summit:

While looking through Mary Meeker's 2006 Web 2.0 Summit presentation, I was struck by the figures on page 19: "Peer-to-Peer (P2P) traffic was 60% (and rising) of Internet traffic in 2004, with BitTorrent accounting for 30% of traffic, per CacheLogic". You can definitely see why this is the case, as P2P is normally used to download very large media files - music, movies, etc. But still it makes you realise just how big P2P currently is on the Internet and, given the increasing amount of video coming onto the Web, how crucial it is going forward.


jgpaiva

Re: Blog Piece: Trend Watch: P2P Traffic Much Bigger Than Web Traffic
« Reply #1 on: December 07, 2006, 06:29 PM »
That's a cool graph! :D
It's interesting to see how it's been developing, especially how fast p2p is growing.
As for how much "market share" it has, I don't find that particularly surprising: a lot of open-source software is already released via torrents, and it seems to be the best way to take advantage of everybody's connection instead of just the servers'.
Also, most of my friends download on the order of tens of GB every month, and I bet that isn't web traffic ;)

mouser

Re: Blog Piece: Trend Watch: P2P Traffic Much Bigger Than Web Traffic
« Reply #2 on: December 07, 2006, 06:31 PM »
I'd love to see a graph like this comparing real mail vs. spam mail over time. Anyone know of any such charts?

Second Shadow

Re: Blog Piece: Trend Watch: P2P Traffic Much Bigger Than Web Traffic
« Reply #3 on: November 25, 2007, 07:31 PM »
Quote from: mouser
I'd love to see a graph like this comparing real mail vs. spam mail over time. Anyone know of any such charts?

Hmmm ... this is the best such chart I could find:

[chart image: real vs. spam mail over time]

It comes from the Distributed Checksum Clearinghouse website.

Overview:

The DCC or Distributed Checksum Clearinghouse is an anti-spam content filter that runs on a variety of operating systems. As of mid-2007, it involves millions of users, tens of thousands of clients and more than 250 servers collecting and counting checksums related to more than 300 million mail messages on week days. The counts can be used by SMTP servers and mail user agents to detect and reject or filter spam or unsolicited bulk mail. DCC servers exchange or "flood" common checksums. The checksums include values that are constant across common variations in bulk messages, including "personalizations."

The idea of the DCC is that if mail recipients could compare the mail they receive, they could recognize unsolicited bulk mail. A DCC server totals reports of checksums of messages from clients and answers queries about the total counts for checksums of mail messages. A DCC client reports the checksums for a mail message to a server and is told the total number of recipients of mail with each checksum. If one of the totals is higher than a threshold set by the client, and local whitelists do not mark the message as solicited, the DCC client can log, discard, or reject the message.
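
As a rough sketch of that client-side decision (my own illustration in Python, not the actual DCC client code; the function name, threshold value, and whitelist below are invented for the example):

Code:
# Illustrative sketch of the client-side decision described above.
# report_checksums(), BULK_THRESHOLD and WHITELIST are hypothetical
# placeholders, not the real DCC client API.

BULK_THRESHOLD = 20                        # example: 20+ recipients is treated as bulk
WHITELIST = {"newsletter@example.org"}     # senders this site considers solicited

def handle_message(sender, checksums, report_checksums):
    """Report the message's checksums and act on the totals the server returns."""
    if sender in WHITELIST:
        return "deliver"                   # local whitelist overrides the counts
    totals = report_checksums(checksums)   # one query: total recipients seen per checksum
    if max(totals) >= BULK_THRESHOLD:
        return "reject"                    # could also log or discard, per local policy
    return "deliver"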

Because simplistic checksums of spam would not be effective, the main DCC checksums are fuzzy and ignore aspects of messages. The fuzzy checksums are changed as spam evolves. Since the DCC started being used in late 2000, the fuzzy checksums have been modified several times.
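
To get a feel for why the checksums have to be fuzzy, here is a toy example of my own (not DCC's real algorithm): hash a normalized form of the body, so that per-recipient "personalizations" don't change the value.

Code:
# Toy fuzzy checksum, purely to illustrate the idea; DCC's real checksums differ.
import hashlib
import re

def fuzzy_checksum(body: str) -> str:
    text = body.lower()
    text = re.sub(r"\S+@\S+", "", text)       # drop e-mail addresses (a common personalization)
    text = re.sub(r"\d+", "", text)           # drop numbers (order IDs, tracking codes, ...)
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return hashlib.sha1(text.encode()).hexdigest()

# Two "personalized" copies of the same bulk message now hash identically:
a = fuzzy_checksum("Dear bob@example.com, your prize #12345 is waiting!")
b = fuzzy_checksum("Dear sue@example.net,  your prize #99321 is waiting!")
assert a == b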

Unless used with isolated DCC servers and so losing much of its power, the DCC causes some additional network traffic. However, the client-server interaction for a mail message consists of exchanging a single pair of UDP/IP datagrams of about 100 bytes. That is often less than the several pairs of UDP/IP datagrams required for a single DNS query. SMTP servers make DNS queries to check the envelope Mail_From value and often several more. As with the Domain Name System, DCC servers should be placed near active clients to reduce the DCC network costs. DCC servers exchange or flood reports of checksums, but only the checksums of bulk mail. Since most mail is not bulk and only representative checksums of bulk mail need to be exchanged, flooding checksums among DCC servers involves a manageable amount of data.
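
The "single pair of UDP datagrams" point is easy to picture; a bare-bones request/reply exchange looks something like this (plain sockets only; the real DCC wire format, port, and payload layout are not shown here):

Code:
# Minimal one-request/one-reply UDP exchange, for illustration only.
# Host, port and payload are placeholders; this is not the DCC protocol itself.
import socket

def query_server(host: str, port: int, payload: bytes, timeout: float = 2.0) -> bytes:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(payload, (host, port))   # one small datagram out (~100 bytes in DCC's case)...
        reply, _addr = sock.recvfrom(512)    # ...and one datagram back with the totals
        return reply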



f0dder

Re: Blog Piece: Trend Watch: P2P Traffic Much Bigger Than Web Traffic
« Reply #4 on: November 25, 2007, 08:26 PM »
It's really sad the open-source guys haven't embraced torrent technology; whenever I've found a project with a torrent release (usually stuff large enough to warrant the "complication" of p2p vs. a simple download, like a Linux or BSD ISO image), either the tracker has been down or I've gotten really lame speeds (as opposed to some of the university HTTP servers that host the distros and can easily reach 2 MB/sec).

The people involved in distribution should really get together; I'm sure they could save a considerable amount of bandwidth and system load if enough of them set up torrent servers instead of the traditional FTP and HTTP for their ISOs. It would also make distribution to the mirror sites very easy and automated, if set up properly.

I wonder if the Trend Watch takes encrypted (SSL/TLS, torrent protocol encryption, and SSH-tunneled) traffic into account... I have a feeling that things like encrypted FXP between scene topsites have slipped out of this trend. And even if it doesn't quite live up to the combined trickle of all the p2p "end-users", it ought to amount to something :)

No wonder that whole net neutrality debate started.
- carpe noctem

Ralf Maximus

Re: Blog Piece: Trend Watch: P2P Traffic Much Bigger Than Web Traffic
« Reply #5 on: November 25, 2007, 08:28 PM »
The thing about p2p is this: a few users can generate a disproportionate amount of traffic. One p2p client can chew up gigabytes of bandwidth per day, compared with typical users whose usage patterns are more "bursty". Normal folk may watch YouTube or download mp3s occasionally, but that pales in comparison.

It's why Comcast has been motivated to sabotage p2p networks and why the days of "unlimited" access may be numbered.  It's a bit like everyone in a house chipping in to pay for telephone service, but one guy spends all day monopolizing the phone.

If EVERYONE were to abuse p2p, the internet would probably collapse under the load.

f0dder

Re: Blog Piece: Trend Watch: P2P Traffic Much Bigger Than Web Traffic
« Reply #6 on: November 25, 2007, 08:36 PM »
If p2p were effectively hampered, you'd see a massive decrease in fast ADSL lines :)

If EVERYONE were to abuse p2p... collapse... hmm. Perhaps. There are already a lot of 10/10 and 100/100, even some gigabit, "seedboxes" in use, coupled with ever-faster home connections (relatively inexpensive fiber with 10/10 as standard, and even more on offer, is becoming ever more available). But we can already see that from the graphs :)
- carpe noctem