
Author Topic: Compression: BigByte  (Read 10295 times)

alecjw

  • Participant
  • Joined in 2005
  • Posts: 42
Compression: BigByte
« on: December 31, 2005, 04:29 PM »
I made a 1GB file with BigByte and compressed it. The compressed file size was about 70KB. I'm guessing that's because BigByte creates files which are just 11111111111111111111111111111111 etc., and when compressed it is stored as "8589934592 1's" rather than 11111111111111111111111111 etc. Maybe you could make it generate files which are 10101010101010 etc., or maybe even randomly generated, to prevent such a decrease in file size when compressed.
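
For illustration, a minimal sketch of why this happens, using Python's standard-library zlib (my choice for the demo; BigByte itself is not a Python program): a run of one repeated byte collapses to almost nothing, while random bytes barely shrink.

    # Sketch: repeated bytes vs. random bytes under zlib.
    import os
    import zlib

    size = 1024 * 1024                 # 1 MB is enough to see the effect
    ones = b"1" * size                 # one repeated byte, BigByte-style
    rand = os.urandom(size)            # random bytes, no redundancy

    print(len(zlib.compress(ones)))    # a few KB at most
    print(len(zlib.compress(rand)))    # roughly the full 1 MB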

LuckMan212

  • Charter Member
  • Joined in 2005
  • Posts: 137
Re: Compression: BigByte
« Reply #1 on: December 31, 2005, 06:30 PM »
Why would you compress it?

alecjw

  • Participant
  • Joined in 2005
  • Posts: 42
Re: Compression: BigByte
« Reply #2 on: January 01, 2006, 05:46 AM »
Good point. I just compressed it and thought that it didn't seem right. But thinking about it, I suppose I wouldn't need to compress it...

LuckMan212

  • Charter Member
  • Joined in 2005
  • Posts: 137
Re: Compression: BigByte
« Reply #3 on: January 01, 2006, 09:10 AM »
I suppose there might be a valid reason for wanting to compress the files... one example might be to test the effectiveness of, and compare, the different compression formats (ZIP, RAR, ACE, LZH, etc.). With all 1's this is not really possible. With a string of random characters it would be a more accurate measurement of real-world performance.
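
That comparison is easy to sketch with Python's standard-library compressors standing in for the archive formats (an assumption on my part; the post names ZIP/RAR/ACE/LZH, not these libraries):

    # Sketch: all-identical input makes every compressor look alike;
    # random input at least gives each format nonzero work to do.
    import bz2
    import lzma
    import os
    import zlib

    size = 1024 * 1024
    samples = {"ones": b"1" * size, "random": os.urandom(size)}

    for label, data in samples.items():
        for mod in (zlib, bz2, lzma):
            print(label, mod.__name__, len(mod.compress(data)))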

mouser

  • First Author
  • Administrator
  • Joined in 2005
  • Posts: 40,896
    • Mouser's Software Zone on DonationCoder.com
Re: Compression: BigByte
« Reply #4 on: January 01, 2006, 09:38 AM »
Having an option in the program to generate random bytes would be nice.
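
A chunked loop is one way such an option could work; this sketch is my own illustration in Python (BigByte's actual language is not stated in the thread), using os.urandom so the output resists compression:

    # Sketch: write `size` random bytes in fixed-size chunks so the
    # whole file never has to fit in memory at once.
    import os

    def write_random_file(path, size, chunk=1024 * 1024):
        with open(path, "wb") as f:
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(os.urandom(n))
                remaining -= n

    write_random_file("random.bin", 100 * 1024 * 1024)  # 100 MB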

alecjw

  • Participant
  • Joined in 2005
  • Posts: 42
Re: Compression: BigByte
« Reply #5 on: January 02, 2006, 11:55 AM »
Yay! That's my first idea that hadn't already been thought of 10 years before I thought of it. :)

skrommel

  • Fastest code in the west
  • Developer
  • Joined in 2005
  • Posts: 933
    • 1 Hour Software by skrommel
Re: Compression: BigByte
« Reply #6 on: February 23, 2006, 07:46 PM »
 :) I've pretty much given up on BigByte after I tried a program that could make a terabyte file in seconds! I think it just made an entry in the FAT. But I can't remember where I found it...
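
That sounds like sparse-file allocation: the filesystem records the length without allocating data blocks, so a huge file appears instantly. A minimal sketch of the usual trick, with the caveat that sparseness depends on the filesystem (ext4 and others do it automatically; NTFS wants the file explicitly flagged sparse first):

    # Sketch: create a huge file without writing its contents.
    size = 1024 ** 4                   # 1 TiB

    with open("huge.bin", "wb") as f:
        f.seek(size - 1)               # jump to the last byte...
        f.write(b"\0")                 # ...the skipped range is a hole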

Adding random content is impossible the way BigByte operates: it makes a 1-byte file, makes double-sized copies until the desired size is reached, then merges the needed copies and removes the rest.
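
As I read that description, the scheme amounts to a binary decomposition of the target size; a hedged Python sketch (names and structure are my own, not BigByte's actual code):

    # Sketch: copy-double a 1-byte file into pieces of 1, 2, 4, ...
    # bytes, concatenate the pieces matching the set bits of the
    # target size, then delete the leftovers. Since every byte is a
    # copy of the first, random content can't be produced this way.
    import os
    import shutil

    def bigbyte(path, size, byte=b"1"):
        length, name = 1, "piece_1.tmp"
        with open(name, "wb") as f:
            f.write(byte)                       # the initial 1-byte file
        pieces = [(length, name)]
        while length * 2 <= size:               # double until big enough
            nxt = "piece_%d.tmp" % (length * 2)
            with open(nxt, "wb") as out:
                for _ in range(2):              # two copies = double size
                    with open(name, "rb") as src:
                        shutil.copyfileobj(src, out)
            length, name = length * 2, nxt
            pieces.append((length, name))
        remaining = size
        with open(path, "wb") as out:           # merge the needed copies
            for length, name in reversed(pieces):
                if length <= remaining:
                    with open(name, "rb") as src:
                        shutil.copyfileobj(src, out)
                    remaining -= length
        for _, name in pieces:                  # remove the rest
            os.remove(name)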

Skrommel
« Last Edit: February 23, 2006, 07:59 PM by skrommel »

db90h

  • Coding Snacks Author
  • Charter Member
  • Joined in 2005
  • Posts: 481
  • Software Engineer
    • Bitsum - Take control of your PC
Re: Compression: BigByte
« Reply #7 on: February 23, 2006, 07:50 PM »
Quote from: LuckMan212 on January 01, 2006, 09:10 AM
With all 1's this is not really possible. With a string of random characters it would be a more accurate measurement of real-world performance.

Actually, neither would represent real-world performance. Real-world benchmarking would require a collection of various common file formats. Compression targeted at specific file formats does much better on them than an algorithm not tuned for them. Therefore, if you were to run tests on pseudo-random or arbitrary data, the best-performing compression algorithm would not necessarily be the best-performing algorithm on real-world data sets.
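
One way to act on that: benchmark against a folder of representative real files rather than synthetic data. A sketch under assumptions of mine (the "corpus" folder is hypothetical, assumed non-empty, and Python's zlib/bz2/lzma stand in for the archive formats discussed above):

    # Sketch: total compressed size over a corpus of real files.
    import bz2
    import lzma
    import pathlib
    import zlib

    compressors = {"zlib": zlib.compress, "bz2": bz2.compress,
                   "lzma": lzma.compress}
    corpus = pathlib.Path("corpus")    # hypothetical folder of documents,
                                       # images, executables, etc.
    raw = 0
    packed = {name: 0 for name in compressors}

    for f in corpus.rglob("*"):
        if f.is_file():
            data = f.read_bytes()
            raw += len(data)
            for name, compress in compressors.items():
                packed[name] += len(compress(data))

    for name, size in packed.items():
        print("%s: %.1f%% of original size" % (name, 100 * size / raw))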