Recent Posts

301
Living Room / Re: What's the best registry cleaner? Ask Leo says: none
« Last post by Crush on December 25, 2007, 03:05 PM »
I also agree that most registry cleaners are not as useful as expected - and combining several of them doesn't deliver satisfying results either, I think.

If you take a close look at what registry cleaners actually do, you'll see that only a few very specific branches get "cleaned". GUIDs and information scattered across programs/DLLs/paths/setup collections are not searched or found reliably. With every piece of hardware or software you install or connect, the registry keeps growing with information you'll never be able to clean up automatically in a perfect way.

I sometimes let a cleaner run over it (just for fun), but I get the best results by hand. I must admit that in the beginning I killed too many entries, but over time you learn what you may delete and what you mustn't. This "knowledge" is what registry cleaners would need built in, some kind of artificial intelligence deciding which keys can be deleted and which connections might belong to other branches - and that apparently is too complicated. So they use the simplest rules to decide what has to be deleted and throw their tools on the market, doing only half of the job that could possibly be done.

Even a fairly small number of entries in the Software branch is enough to slow down every registry search, installation and uninstallation, as well as the startup list and the right-click context-menu list.

Why?

The problem is that Microsoft never intended so many programs to install themselves and write mostly useless junk cross-seeded into the registry. They never expected anyone to use a running system for more than two or three years without reinstalling or upgrading the operating system. It was simply easier for programmers to create a few keys in the registry instead of using INI files. That could be done quite simply as well - but the registry functions are a perfect container for "hidden" information that normal users cannot change as easily. If I could make the decision for future Windows versions I would ban the registry for anything other than the OS itself and force programmers to split their information into separate, easy-to-clean and easy-to-manage INI files, or at least reorganize the structure of the registry itself.
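Just to illustrate how small the difference for the programmer really is (a minimal sketch; the app name, value name and file path are made up for the example):

void SaveWindowWidth(int width)
{
    wchar_t value[16];
    wsprintfW(value, L"%d", width);

    // INI variant: one self-contained, easy-to-delete file per program.
    WritePrivateProfileStringW(L"MyApp", L"WindowWidth", value,
                               L"C:\\ProgramData\\MyApp\\settings.ini");

    // Registry variant: the same setting, but buried in HKCU\Software.
    HKEY key;
    if (RegCreateKeyExW(HKEY_CURRENT_USER, L"Software\\MyApp", 0, NULL, 0,
                        KEY_SET_VALUE, NULL, &key, NULL) == ERROR_SUCCESS)
    {
        RegSetValueExW(key, L"WindowWidth", 0, REG_SZ,
                       (const BYTE*)value,
                       (DWORD)((lstrlenW(value) + 1) * sizeof(wchar_t)));
        RegCloseKey(key);
    }
}

The effort is nearly identical - but only one of the two leaves an easy-to-find, easy-to-remove trace.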

The best thing would be a combination with a virtualization system like Altiris SVS that can split off/bypass all of a program's information into a separate folder. If it only did this with registry entries (it does it with every single file created inside a virtualization layer... which is good for temporary/testing purposes, but not for simple registry splitting) it could be a perfect solution/add-on for the registry! The program has some other flaws, but the direction seems to be the right one!
302
Living Room / Re: AVG anti spyware 7.5 Licence key
« Last post by Crush on December 18, 2007, 11:41 AM »
 ;D You don't need a licence key if you can get it for free: http://free.grisoft....cts/us/frt/0?prd=asf
303
Living Room / Re: Korean scientists clone cats glowing in dark
« Last post by Crush on December 17, 2007, 12:15 PM »
... they're only glowing under ultraviolet light!
304
General Software Discussion / Get Divx Pro for free!
« Last post by Crush on December 10, 2007, 12:04 PM »
You can get DivX Pro for free here (only today)  :Thmbsup:
Install it and get the key by registering your e-mail address.
http://www.divx.com/dff/index.php
305
General Software Discussion / Re: Upgraded to 64-bit XP, need virtual CD/DVD drive
« Last post by Crush on November 25, 2007, 05:16 AM »
Hi, f0dder!

Why don't you try Daemon Tools V4.10 x64?   :D  (http://www.daemon-to...Category&catid=5) I can't remember the adware ever being a problem - even when I tried installing it.
306
Found Deals and Discounts / Re: EverNote - Free today at GAOTD
« Last post by Crush on November 15, 2007, 07:02 PM »
I can't see an activate.exe in my zip file - do you have a different download?
307
Found Deals and Discounts / Re: EverNote - Free today at GAOTD
« Last post by Crush on November 15, 2007, 03:26 PM »
EverNote seems to activate via this link in your browser: http://www.giveawayo...m/evernote/?activate (I hope)

So far I can't clearly see what actually gets stored this way. Perhaps nothing? It could be one of these two cookies or the certificate file... In EverNote itself I can't find a damn hint about whether it is registered now or not - and the clock is ticking. How can I tell if my installation is registered or not?!
308
GOE 2007 Challenge Downloads / Re: GOE 2007 Programming Contest for November 2007
« Last post by Crush on November 02, 2007, 06:05 PM »
Is "Getting Organized" just a motto, or an aim to come together for bigger projects?
309
Living Room / Re: The Ugliest Products in Tech History
« Last post by Crush on October 16, 2007, 11:47 AM »
I also have a VR helmet at home. It's quite COOOL. USE YOUR HEAD AS A JOYSTICK! WOW!  ;D
!!!Don't forget to look at the picture at the bottom of the text showing how to use it!!!
http://cgi.ebay.de/U...Z12831QQcmdZViewItem
310
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 16, 2007, 01:59 AM »
Aha, I'm not alone with this kind of project... cool... I think this is a really useful type of software. It's a pity that there were so few suggestions in my what-would-you-like-to-see threads (not only at DonationCoder).

I spent a lot of time trying out different approaches to maximum performance and analyzing/benchmarking existing catalogers, code and file systems. There are also other screws to turn: perhaps there are ways to make the structure even more flexible than a normal fixed layout. I've already experimented with an emulation-like dynamic recompiler that assembles code from snippets, processing only the data and parameters you really need and skipping unnecessary code such as date/time/directory checks when they are not used in a search. I'm still thinking about how to use this in a simple way with different structures (especially for additional outsourced data). A main goal of the design is to use several CPU cores in parallel, so that they can even run different search tasks returning different result lists, and so that the code can be reused for other high-speed, information-heavy databases or simple ones too (movies, library, customer management and so on). Organizing this and the other planned features so that they work optimally together is also crushing my brain.
311
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 15, 2007, 04:37 PM »
It's quite simple, and if you didn't recurse you did it right! Changing directories with FindFirstFile() costs much more time than FindNextFile(). That's all there is to know. So you step through all members of the current directory and remember every subdirectory entry in a search array (you look at their contents later). Then you do the same with all members of that array. That's much faster than descending into each directory the moment you find it. But it has a disadvantage: calculating directory sizes becomes more complicated. A sketch of what I mean follows below.
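A minimal sketch of that scan order (breadth-first, one FindFirstFile per directory; error handling kept deliberately simple, and not my actual indexer code):

#include <windows.h>
#include <deque>
#include <string>
#include <vector>

// Collect all file and directory names under 'root', breadth-first:
// enumerate one directory completely, queue its subdirectories, repeat.
std::vector<std::wstring> ScanTree(const std::wstring& root)
{
    std::vector<std::wstring> entries;
    std::deque<std::wstring> pending;       // directories still to visit
    pending.push_back(root);

    while (!pending.empty())
    {
        std::wstring dir = pending.front();
        pending.pop_front();

        WIN32_FIND_DATAW fd;
        HANDLE h = FindFirstFileW((dir + L"\\*").c_str(), &fd);
        if (h == INVALID_HANDLE_VALUE)
            continue;                       // unreadable directory, skip it

        do
        {
            if (wcscmp(fd.cFileName, L".") == 0 || wcscmp(fd.cFileName, L"..") == 0)
                continue;

            std::wstring full = dir + L"\\" + fd.cFileName;
            entries.push_back(full);

            if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
                pending.push_back(full);    // remember it, descend into it later
        }
        while (FindNextFileW(h, &fd));

        FindClose(h);
    }
    return entries;
}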
312
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 15, 2007, 03:01 PM »
Hi, so the coders are waking up now  :P ?
We've drifted quite far from the topic of this thread, I think. Perhaps we should open a new "coders" corner?

Thank you very much for so many hints and tricks. I really welcome every suggestion for improving the program and will think about them.

@Ralf Maximus
I calculated the average characters per entry of my HD content: about 202 characters per entry - file systems with less deeply nested structures shouldn't reach that size, perhaps between 50 and 100, I guess. You're right. I've heard your optimisation advice very often before; I have enough experience to know where to look to release real power.
The system is planned to be as flexible as possible. It should be possible to create special search databases containing only the information you really want to look at or need. If you don't want mp3 previews or big thumbnails (or only up to a particular maximum size), you should be able to create a database that contains just your own entries, so that unneeded data doesn't waste space on your HD. That makes it possible to create mini-databases containing everything you like on a very small USB stick, for example. Only the main search header stays the same. I also want to include hit counters for catalogs that frequently served results before, to optimize file caching.

@F0dder
I was often told by "users" that speed is not very important. I tried a lot of programs (see my big list) and found 99% of them didn't cover my personal needs. Nearly all of them preload the catalogs completely into memory, searching several catalogs without preloading them is not possible, and self-optimizing searches are non-existent. There are some heavy code-dependent slowdowns, and I would say any search that needs more than 10-20 seconds is far too long. I wanted to find a reasonably smart way to offer both speed and functionality, preferring speed myself. At the moment I can search roughly 40-50 million entries case-insensitively with results on my rather slow and old laptop; counting the characters, that works out to a case-insensitive substring search throughput of about 970,000,000 characters/second (ok, at the moment ASCII only - Unicode is also planned). So I can say it can examine at least 2,000,000,000 chars/second including storing the results (on a fairly old 3.2 GHz CPU). Other things like analyzing previous searches and building specialized speedup hash tables should bring another big performance explosion, I think. This is only to give an idea of the current state. This search speed makes true realtime search-while-typing possible. Searching through my 60 GB HD with ~215,942 entries finishes in 4-5 milliseconds  :Thmbsup:!

I also thought about using pointers to the strings. There are several reasons why I don't want to do this:
1.) Showing/calculating directory structures needs an intensive analysis of the full index.
2.) You need a second file to add new entries to the database efficiently, and opening/closing files costs much more time than packing everything into one. Seeks are about 250-400 times faster than open/close commands (I benchmarked things like this too :D).
3.) Such a sorted structure only makes sense for exact-match searches. You achieve even higher speeds with temporary hash tables. If I did something similar I would prefer a radix table.
4.) I only reached this speed by implementing some special hashes in the main block of each entry. So I only have to test a fairly small set of strings by direct comparison, and that I do with an ultra-fast self-made Boyer-Moore-Crush :P search which - because of its algorithm - doesn't need to convert characters to upper/lower case to check for a substring (see the sketch after this list). So there is no need to store both a normal and an upper-cased copy of each string.
5.) Most searches normal people start are substring searches, not exact matches - and those I do extremely fast. On top of that I also want to implement fuzzy searches. For the few exact-match searches I can build the temporary hash map. Often you know the type/extension of the strings (.dll/.exe/.doc or combinations of several such) and that also needs special treatment for faster access. The same has to be done for filters/triggers/aliases and so on.
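For illustration, here is a minimal sketch of the general idea behind a case-insensitive Horspool-style search (not my actual Boyer-Moore-Crush code, ASCII only): the case folding is baked into the skip table and the comparison, so no upper-cased copy of the data ever has to be stored.

#include <cctype>
#include <cstddef>

// Case-insensitive Boyer-Moore-Horspool substring search (ASCII only).
// Returns the offset of the first match, or -1 if 'needle' does not occur.
ptrdiff_t FindSubstringNoCase(const char* hay, size_t hayLen,
                              const char* needle, size_t needleLen)
{
    if (needleLen == 0)
        return 0;
    if (needleLen > hayLen)
        return -1;

    // Skip table built on lower-cased bytes: how far we may jump when the
    // last compared haystack character is 'c'.
    size_t skip[256];
    for (size_t i = 0; i < 256; ++i)
        skip[i] = needleLen;
    for (size_t i = 0; i + 1 < needleLen; ++i)
        skip[(unsigned char)tolower((unsigned char)needle[i])] = needleLen - 1 - i;

    size_t pos = 0;
    while (pos + needleLen <= hayLen)
    {
        // Compare right to left, folding case on the fly.
        size_t i = needleLen;
        while (i > 0 &&
               tolower((unsigned char)hay[pos + i - 1]) ==
               tolower((unsigned char)needle[i - 1]))
            --i;

        if (i == 0)
            return (ptrdiff_t)pos;          // match found

        pos += skip[(unsigned char)tolower((unsigned char)hay[pos + needleLen - 1])];
    }
    return -1;
}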

The path is stored a few entries earlier and, if it were really needed, I could perhaps insert a reference to it. But every extra byte slows down the memory-friendly search. If the entries need to be sorted, I'll do it in extra tables on the fly.

I agree with "top-level fixed-size structures" (entering the club).

Regarding your indexing system, I'd like to say that all objects are managed by a reference system. You can point at the file/file position, at memory holding a single entry, or at the fully cached index plus an offset - different treatments need different kinds of management.

Sorting would destroy some features such as appending changes with time stamps. Accessing the entries at the right time would require a total reorganisation of the sort order. That also made it an easier decision to do sorting with extra tables.

Performance/stats collecting is already planned.

The write caching is also there to write out the results or their references as fast as possible, collecting previous searches that you can recall/analyze/compare against if needed. That way memory usage can be reduced to almost nothing.

Besides, my file-system benchmarking also helped me find a new way to scan directory structures on media much faster than the "normal" way.

Perhaps I can use the binary search differently from what you meant - I'll think about it. Some other search algorithms I was considering are Bloom-filter searching, multi-string searches and heuristic similarity searching. The more search types I implement, the more freedom I get when searching.
313
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 15, 2007, 06:15 AM »
You're right! I'm working on a high-speed file indexer for extremely large amounts of data, especially for really big and fast media such as the FusionIO! One of its features is the ability to choose how it handles the information: seeking the data in small pieces on disk and remembering their positions for the results, keeping the data cached in memory, or assembling the real data from the references in memory for further work, so that you can choose the trade-off between speed and memory usage yourself. This way it is possible to search/list/browse through millions of results with rather little memory.

On that topic I started these two threads:
https://www.donation...dex.php?topic=7764.0
https://www.donation...dex.php?topic=7183.0 // info about the cataloger itself. The result list is quite old - I've since more than doubled the speed!

The plugins should be able to write as much data as they like, but only information that can be searched for should go into the main search base; everything else goes into separate data sets (files). They are used only as often as the file type triggers their call. Some plugins should be able to add additional temporary data from the internet to results and to the database.

I personally want to index CD/DVD/HD/network/FTP and/or HTTP (like a web spider).

The number of files in a single data set can reach several hundred thousand entries. Big networks could deliver several million.

"I really like brainstorming these kind of optimization scenarios"
I also like this, and I've really racked my brain over every optimization that could be done. Most of the things I found forced me into compromises with advantages & disadvantages in several directions - I had to weigh them.

314
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 15, 2007, 01:51 AM »
It's not possible to reference strings at another position (that would lead to extensive file seeking) - I want to be able to jump directly to the files and entries by offset, skipping unneeded data, and only touch them when I want to analyze/visualize them, for various design reasons. My QuickFile class prevents unnecessary slowdowns in the future - that's enough for my needs. I just wondered why this performance problem isn't mentioned more often by others handling large amounts of data. The seeking is needed for each data block in order to write its length in front of it (also so sets can be skipped faster) and to read single blocks from the file directly into memory without useless data. Calculating the size before writing the block is nearly impossible, because additional information can be woven into the blocks by plugins. Writing the block size also helps to rearrange/insert/cut/rebuild new files faster when something changes. The strings are the main search and sort criterion, so they shouldn't be outsourced into additional files or blocks, to avoid many open/close/seek calls.
315
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 14, 2007, 06:08 AM »
I know it's faster to write bigger chunks - that's the principle behind write caching. The problem is: what if the structures/objects consist of lots of very small data types and you have a huge number of them to write?

Example:

struct fileobject
{
  char type;                // entry type (file, directory, ...)
  char attributes;          // packed attribute flags
  UINT modificationdate;    // last-modified timestamp
  UINT flags;
  UINT strlen;              // length of 'filename' in characters
  std::string filename;     // the only variable-length member
} fo;

Normally you define the output in a class and write each member of the structure one after the other. That's also how archive classes serialize objects, similar to this:
void fileobject::Write(CFile& out)
{
  out.Write(&type, sizeof(type));
  out.Write(&attributes, sizeof(attributes));
  out.Write(&modificationdate, sizeof(modificationdate));
  out.Write(&flags, sizeof(flags));
  out.Write(&strlen, sizeof(strlen));
  out.Write(filename.data(), strlen);   // write the characters, not the string object
}

Often I have file structures with several hundred thousand objects or more (especially on HDs). The only way I could imagine saving this with the normal file system is to write the fixed-size parts of the structure clustered together, like this:

// write all fixed-size members in one call (everything before 'filename') ...
outputFile.Write((char*)&fo, (char*)&fo.filename - (char*)&fo);
// ... then the variable-length characters of the name
outputFile.Write(fo.filename.data(), fo.strlen);

It's possible and much faster than writing things byte by byte (that was only an example to show where the system falls down), but it's not a very nice way to save data, is it? One problem still remains: the number of write calls depends on the layout of the structure. Caching is still much faster than serializing. I did it this way at first and wondered why some of my result sets sometimes needed 10 seconds or more just to write a few megabytes of data. Often building the structure in memory by reading the directory tree of a partition took less time (while scanning the directories, the internal XP cache worked very fast). That's what got me into benchmarking and thinking about I/O speeds. As I said before: if, after each object, you also have to seek() somewhere else to write the complete object size and then seek back to the end, you turn a slow donkey into an even slower turtle. Unfortunately that's what I'm forced to do because of some features that need it! A sketch of the buffered alternative is below.
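Just to sketch what a write cache buys here (hypothetical helper, reusing the fileobject struct from above, not my actual QuickFile code): if the record is assembled in a memory buffer first, the block size can be patched inside the buffer and the whole thing written in one call - no seek() against the file at all.

#include <vector>
#include <cstring>

// Append one length-prefixed record to 'buf' entirely in memory. The 4-byte
// size placeholder is patched inside the buffer afterwards, so the file itself
// is only ever written forward, in one big chunk, with no seek-back at all.
void AppendRecord(std::vector<char>& buf, const fileobject& fo)
{
    size_t sizePos = buf.size();                       // where the size field will live
    UINT   placeholder = 0;
    buf.insert(buf.end(), (const char*)&placeholder,
                          (const char*)&placeholder + sizeof(placeholder));

    size_t bodyStart = buf.size();

    // fixed-size members, clustered exactly as in the struct above
    buf.insert(buf.end(), (const char*)&fo, (const char*)&fo.filename);
    // variable-length part: the characters of the file name
    buf.insert(buf.end(), fo.filename.data(), fo.filename.data() + fo.strlen);

    // patch the real block size into the placeholder
    UINT blockSize = (UINT)(buf.size() - bodyStart);
    std::memcpy(&buf[sizePos], &blockSize, sizeof(blockSize));
}

// Later, once per flush: outputFile.Write(buf.data(), (UINT)buf.size());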
316
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 13, 2007, 06:21 PM »
I have no idea how to reduce the user-mode<>kernel-mode switching without a buffer of my own. Do you have a simple solution?

Nevertheless, I'll use my own caching system in the future and won't trust the "normal" file system too much. Next, let's see how caching with VMem (video memory) performs compared to normal memory.
317
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 13, 2007, 01:50 PM »
Here's the kernel-mode usage with normal IO (plain CFile in a ~10 second benchmark):

Results for User Mode Process BENCHMARKER.EXE (PID = 3180)

    User Time                   = 1.73% of the Elapsed Time // the time used by the program itself is very small
    Kernel Time                 = 48.11% of the Elapsed Time // a rather big kernel-mode usage!!!

                                  Total      Avg. Rate
    Page Faults          ,            0,         0/sec.
    I/O Read Operations  ,            0,         0/sec.
    I/O Write Operations ,      2779646,         268322/sec.
    I/O Other Operations ,            0,         0/sec.
    I/O Read Bytes       ,            0,         0/ I/O
    I/O Write Bytes      ,      2779646,         1/ I/O // the I/O trace shows that the bytes are written one at a time, with no cache!
    I/O Other Bytes      ,            0,         0/ I/O

OutputResults: ProcessModuleCount (Including Managed-Code JITs) = 22
Percentage in the following table is based on the Total Hits for this Process

Time   196 hits, 25000 events per hit --------
 Module                                Hits   msec  %Total  Events/Sec
ntdll                                   117      10359    59 %      282363   // too much
kernel32                                 48      10359    24 %      115841   // too much
MFC80U                                   27      10359    13 %       65160   // too much
Benchmarker                               4      10359     2 %        9653 // ok


And here are the results with QuickFile (write caching in ~10 second bench)


Results for User Mode Process BENCHMARKER.EXE (PID = 4028)

    User Time                   = 19.27% of the Elapsed Time  // the main CPU time is spent in the program itself, that's fine
    Kernel Time                 = 1.79% of the Elapsed Time   // This is something I can accept

                                  Total      Avg. Rate
    Page Faults          ,            0,         0/sec.
    I/O Read Operations  ,            0,         0/sec.
    I/O Write Operations ,         2749,         274/sec.  // the caching leads to fewer hardware write operations
    I/O Other Operations ,            0,         0/sec.
    I/O Read Bytes       ,            0,         0/ I/O
    I/O Write Bytes      ,    180158464,         65536/ I/O  // here you see my standard IO-cacheblock size (0x10000)
    I/O Other Bytes      ,            0,         0/ I/O

Time   1576 hits, 25000 events per hit --------
 Module                                Hits   msec  %Total  Events/Sec
Benchmarker                             961      10015    60 %     2398901   // great! most time is spent creating the cache!
MSVCR80                                 614      10015    38 %     1532700   // I think this CPU time is used by the CFile class itself
ntdll                                     1      10015     0 %        2496           // this is acceptable  :D

This shows that the write cache relieves kernel32 & ntdll - there is definitely some kind of caching active in WinXP with NTFS, but it's not very effective for write operations in small portions. The M$ coders perhaps concentrated on optimizing read caching more than writing. I'd like to know how Linux file systems would perform in such a test.
318
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 13, 2007, 10:37 AM »
@f0dder
No, I'm not doing an open/close for each byte... just one open before writing the bytes and one close after the loop. I meant that the Close() command is also inside the timed section, to ensure that the data really has been written to the HD. It would be silly to do something like that  :)

My system can only handle about 150 file open/close operations a second. Writing 100,000 bytes that way would take over 11 minutes!

Write() surely uses the system cache by default, because there are flags at open time that can turn the cache buffer off. The files I'm creating don't contain any alternate data streams. Turning the cache off is an extra step that someone using the "normal" file functions without special knowledge wouldn't take.

But it was a good hint to check the file size: I didn't write 1,000,000 or 100,000 bytes/ints - it was 0x1000000 and 0x100000, roughly 16x more data.
That gives a throughput of 43,115,789 bytes/s with QuickFile... which is what I originally expected. It also means the normal file path only managed 56,802 bytes/s, which is extremely disappointing. But I can now forget the remark in my last post about never reaching the maximum throughput.

For the test mentioned above I didn't use seek() and only wrote characters as fast as possible.

Perhaps you'd like to see the main loop; then you'll understand that I don't use any dirty tricks:

// open the target file for exclusive, sequential binary writing
CFile ff(_T("E:\\x.y"), CFile::modeCreate | CFile::modeWrite | CFile::shareExclusive |
                        CFile::typeBinary | CFile::osSequentialScan);
char num = 0;
__int64 nc1 = 0, nc2 = 0;

QueryPerformanceCounter((LARGE_INTEGER*)&nc1); // start the high-resolution performance counter

// the main loop
// oh, I see that I used hexadecimal 0x100000 = 1,048,576 bytes, sorry  :-[ but this had no
// influence so far, since I only compared CFile and QuickFile timings against each other
for (int x = 0; x < 0x100000; x++)
    ff.Write(&num, sizeof(num));    // one single byte per Write() call, on purpose
ff.Close();                         // Close() inside the timed section so the data is on disk

QueryPerformanceCounter((LARGE_INTEGER*)&nc2); // stop the timer to calculate the elapsed time
319
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 13, 2007, 08:16 AM »
Regarding the speeds, please keep in mind that I was talking about only 4 megabytes, and that I reduce the size in the following test to only 100 kilobytes!

The objects range from 16 up to 40-50 bytes on average, I guess. Most of the class members are 2-4 bytes in size. I wanted to repeat the test with bigger elements and have to admit that I also wrote 4-byte integers in the test mentioned above. So I ran it again with:
100,000 ints (4 bytes each)
Normal: first run 22.7 s, second run 18.7 s (perhaps the OS write buffer is slightly visible here?)
QuickFile: 0.03145 s

Then the same test with 100,000 single bytes, as I originally intended:
Normal: 1.) 18.46 s 2.) 18.52 s
QuickFile: 1.) 0.02432 s 2.) 0.02442 s

I think you'll stare at those numbers the same way I did at first. The first test is 721 times faster with the write cache and the second 759 times. To make sure that all data really had been written, I included the file's Close() in the timed section. I don't use any super-special I/O routines: my class is derived directly from CFile and only adds the write cache - so it's guaranteed that the same underlying write routines are used.

The results show how the OS buffer works: it only caches access to the HD tracks and sectors of the file and doesn't collect the supplied data intelligently!

A very interesting point is that there still seems to be more potential. I calculate a throughput of 4,111,842 bytes/s, while the HD Tach 3 benchmark reported an average speed of 23.6 to 35.1 MB/s. Ok, the file system and OS need some time, but is it really so much that I'm slowed down to about 1/6 of the possible speed? The more things I try out, the more I believe that most software doesn't use the hardware optimally.

Later tests and benchmarks with file reading and directory-structure analysis led to similar results. Whatever caching system there is, it doesn't give you the full access performance it could! I implemented some new caching features & hacks and the overall speed is much, much higher than at the beginning.

Something like FusionIO will be standard for ordinary users in 4-5 years, maybe even sooner.
320
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 13, 2007, 02:47 AM »
I don't switch off the cache by writing the file with BIOS interrupts or anything like that. I'm using CFile and CStdioFile, and I too assumed there would be a reasonably good OS write cache in the background. I first noticed this behaviour while serializing larger lists of simple data with CArchive, and that was even slower than the plain CStdioFile::Write() function. After creating lists of 7-10 MB I couldn't believe that the HD (in my 3-year-old HP laptop) was that slow. Benchmark results (with HD Tach 3) showed a throughput of 35.1 MB/s maximum & 23.6 MB/s average read speed.

That wasn't all:
For several reasons my program needed to go back after each object block and write the size of its data, and the simplest way seemed to be: check the file pointer before and after writing an object to calculate its length, then seek to the block start, overwrite the placeholder, and seek back to the end of the file to repeat the whole thing. The standard seek() was extremely slow compared to seeking within the write cache, which only writes to the HD when it overflows or when it is forced to flush.
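For reference, a minimal sketch of that pattern with plain CFile (helper names invented): the block size is only known after the block has been written, so a placeholder goes out first and two seeks patch it afterwards - exactly the part that the write cache later absorbs.

// Reserve a 4-byte size placeholder at the current end of the file.
void BeginBlock(CFile& out, ULONGLONG& sizePos)
{
    UINT placeholder = 0;
    sizePos = out.GetPosition();                 // remember where the size field lives
    out.Write(&placeholder, sizeof(placeholder));
}

// After the block's data has been written: patch the real size in.
void EndBlock(CFile& out, ULONGLONG sizePos)
{
    ULONGLONG end = out.GetPosition();
    UINT blockSize = (UINT)(end - sizePos - sizeof(UINT));
    out.Seek((LONGLONG)sizePos, CFile::begin);   // seek back to the placeholder
    out.Write(&blockSize, sizeof(blockSize));    // overwrite it with the real size
    out.SeekToEnd();                             // and forward again for the next object
}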

I also think that caching data like sound or graphics isn't very reasonable. I'm working with directory/file information like FARR does, and a good caching system in the background makes searches turbo-fast - especially if your code is much faster than the file access. (btw: others in the FARR threads were also crying out for a search cache)

It would be a pity if the transfer rates of future drives like Fusion-io (http://www.techworld...dex.cfm?newsid=10210) that can reach 600 MB/s couldn't be exploited as well as possible because of slow or non-existent caching systems & code.
321
Living Room / Re: Use video RAM as a swap disk?
« Last post by Crush on October 12, 2007, 06:30 PM »
Over the next weeks I'll test how well graphics memory performs as a cache for my project compared to "normal" memory. In the last few weeks I examined the read/write behaviour of hard disks and what can be done to push my tasks to maximum speed. The result was really shocking to me: caching is a really fine thing if it's done right. Write caching in particular can boost the speed of programs enormously. As an example: writing out 1 million bytes with standard I/O write routines one after the other (no caching) took 119 seconds. The same task with a simple home-brewed caching system and just a 64K write cache brought that down to 2.4 seconds! Read caching benefits even more - no comparison to write caching - every byte you can spare for it is gold. I never thought about using graphics memory this way... I'll see. A minimal sketch of the write-cache idea is below.
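To give an idea of the write-cache principle (just a sketch with invented names, not my actual QuickFile class): small writes are collected in a 64K buffer, and only whole blocks are handed to the OS.

#include <afx.h>    // MFC CFile
#include <cstring>

// Tiny write cache on top of CFile: small WriteCached() calls are collected in a
// 64K buffer, and only full blocks are passed on to the underlying CFile::Write().
class CachedFile : public CFile
{
public:
    CachedFile() : m_used(0) {}

    void WriteCached(const void* data, UINT count)
    {
        const BYTE* src = static_cast<const BYTE*>(data);
        while (count > 0)
        {
            if (m_used == sizeof(m_buf))
                FlushCache();                           // buffer full -> one big write
            UINT avail = (UINT)(sizeof(m_buf) - m_used);
            UINT chunk = (count < avail) ? count : avail;
            std::memcpy(m_buf + m_used, src, chunk);
            m_used += chunk;
            src    += chunk;
            count  -= chunk;
        }
    }

    void FlushCache()
    {
        if (m_used > 0)
        {
            CFile::Write(m_buf, m_used);                // a single kernel call per 64K block
            m_used = 0;
        }
    }

    virtual void Close()
    {
        FlushCache();                                   // don't lose the tail of the buffer
        CFile::Close();
    }

private:
    BYTE m_buf[0x10000];                                // 64K cache block
    UINT m_used;
};

Used like a normal CFile, just with WriteCached() instead of Write(); the million one-byte writes from the example above collapse into roughly sixteen 64K writes.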
323
Clipboard Help+Spell / Re: A new name
« Last post by Crush on October 09, 2007, 01:41 AM »
ClipClap?
Slip (Spell & Clip)  :D
324
Living Room / Re: KenR's health and situation
« Last post by Crush on August 29, 2007, 07:57 PM »
Hi KenR! You are a very pleasant DC member and I don't want to miss your posts and reviews. A friend of mine had an accident and has had very bad pain in his spine ever since, so that parts of it had to be fused. That only helped for a fairly short time and he now has to take morphine very often. Perhaps you'll have more luck. I wish you the best and hope that the surgery helps you quickly and lastingly.
325
General Software Discussion / Re: Software to record in MP3 format
« Last post by Crush on August 06, 2007, 09:50 AM »