
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - superticker

Pages: prev 1 2 [3] 4 5 6 next
51
... When you change a tree node text item from the main screen, it assumes this is something you just want to change temporarily.  i.e. it *assumes* that by changing it here and not in tree configuration screen that you do *not* want to save it, that's why it doesn't prompt you....

One way to solve the problem is to add an option about whether changes made on the main screen should be treated as changes to the tree that presumably should be saved (and thus the user will be prompted if they try to change trees or exit without saving them).
I'm trying to decide if adding another configuration option is more or less confusing than simply providing two Quit choices: (1) Save and Quit, or (2) Quit w/o Saving.  Somehow I think the latter approach is more intuitive than adding another config option.

52
... [To create a temporary comment in a note] just design your configuration tree as normal and save it, then when composing a letter, select the node you want to edit, and edit it in the top box above.  Changes in this top box are just temporary and last for the duration of the current letter only.
Now I understand why there isn't an automatic save feature on The Form Letter Machine when quitting.  In order to give the user the choice of (1) saving or (2) not saving upon quitting the program, I would create two mutually exclusive Quit choices: (1) Save & Quit and (2) Quit w/o Saving.

With the single Quit choice, I'm constantly losing my changes because I forgot to save them before quitting.

53
The Form Letter Machine / Auto saving tree and variable files
« on: June 16, 2007, 12:48 PM »
... the protection feature that warns you that you're closing the app without saving the tree sometimes forgets to warn you.

let me fix this, and then decide whether saving tree changes should be automatic.

In practice, one does more saving than aborting of changes, so in this sense, auto saving trees and variables on program exit makes sense.  I have an accounting program (Simply Accounting by ACCPAC, excellent product) that works by auto saving.  At first I was concerned, but as long as the File > Revert option is available, it hasn't been a problem, and I have never lost a thing.  I rarely even open the File menu on my accounting program.

There is an issue that when you switch trees, you'll need to auto save on the switch.  That means if you switch back to the original tree and want to revert, you can't (unless the program saves *.bak files for trees).  However, I'm using several tree files and I've never experienced a case where I wanted to do that.  And with the present implementation, doing this maneuver would still be a problem.
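The *.bak idea could be as simple as renaming the previous save before writing the new one. A sketch in Python (hypothetical file handling for illustration, not TFLM's actual code):

```python
import os

def save_tree_with_backup(path, tree_text):
    """Save a tree file, keeping the prior version as path + '.bak'
    so an auto-saved tree can still be reverted by hand."""
    if os.path.exists(path):
        os.replace(path, path + ".bak")  # previous save becomes the backup
    with open(path, "w") as f:
        f.write(tree_text)
```

With something like that in place, auto saving on a tree switch wouldn't permanently lose the pre-switch version.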

54
The Form Letter Machine / Re: Making command line arguments work
« on: June 16, 2007, 11:38 AM »
Can you try this new build:
https://www.donation...ormLetterMachine.exe

(It seems that in the last version ... the -out=stdout option [was being forced])

I just tried it and it works.  But there's a new issue.  If I close the app dialog box (instead of using the [close] button), TFLM writes a blank output file.  I don't think any app should ever write a blank file.  I would suggest:

  • If the application window is closed, treat that as pushing the [cancel] button so that nothing is written--not even a blank file.

  • Change the name of the [close] button to [Output & close].  Honestly, I didn't even understand the real function of the [close] button until I read this forum.

I might also add that the protection feature that warns you that you're closing the app without saving the tree sometimes forgets to warn you. :(  I'm wondering if the save should just be automatic on program exit, in which case the "File > Save tree" menu item should be replaced by a "File > Revert changes" to restore (reload w/o saving) the previous tree?

55
This is an old issue brought up before in the Bug Tracker, but here it goes again.

If, in the Start In: field of the TFLM shortcut, I redirect the data files to be stored in the
%HOMEDRIVE%%HOMEPATH%\TheFormLetterMachine
directory, the application ignores that and still puts them into the %ProgramFiles%\TheFormLetterMachine directory just the same.

Honestly, for Windows 2K, XP, & Vista compatibility, the app should be putting them in the %APPDATA%\TheFormLetterMachine directory by default, which would work for me.  The Start In: field still should be able to override this default.  (I thought the MS foundation class libraries behaved this way by default, but I may be wrong.)
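The lookup order being suggested (Start In override first, then a per-user default) might be sketched like this; the tree-file name and the check against the working directory are assumptions for illustration, not how TFLM actually behaves:

```python
import os

def data_dir(app_name="TheFormLetterMachine"):
    """Pick where data files go: honor the shortcut's Start In directory
    if a tree file already lives there; otherwise default to a per-user
    %APPDATA% folder instead of %ProgramFiles%."""
    start_in = os.getcwd()  # Windows launches a shortcut in its Start In directory
    if os.path.exists(os.path.join(start_in, app_name + ".xml")):  # hypothetical file name
        return start_in
    appdata = os.environ.get("APPDATA", os.path.expanduser("~"))
    return os.path.join(appdata, app_name)
```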

On an unrelated note, I'm bringing this up now because I had to recently lock down my machine(s) to prevent innocent-looking spam URLs from indirectly linking to exe files that try to download and install on my system.  Clearly, this is a new trick by the spammers to install back doors on machines.  But I now have to grant access exceptions to directories like %ProgramFiles%\TheFormLetterMachine and "%ProgramFiles%\Clipboard Help+Spell" to circumvent this security and keep these programs working.

56
The Form Letter Machine / Re: Making command line arguments work
« on: June 15, 2007, 11:08 AM »
... try dropping the double quotes altogether from the -out parameter.  Since there are no spaces in the filename, they shouldn't be required.
-Boxer Software (June 15, 2007, 10:45 AM)

Yes, but there's a colon in "C:\" that needs the double quotes.  I tried getting around that with symbolic variables (see earlier reply), but that hasn't worked either.
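For what it's worth, the colon itself shouldn't be the problem: the Microsoft C runtime's command-line quoting rules only require quotes around arguments containing whitespace. Python's subprocess.list2cmdline implements those same rules and can illustrate this (how TFLM itself parses -out may of course differ):

```python
from subprocess import list2cmdline

# list2cmdline applies the MS C runtime's quoting rules.
# A colon by itself doesn't force quoting:
print(list2cmdline(["-out=C:\\users\\mehl\\announcement.07.txt"]))
# -> -out=C:\users\mehl\announcement.07.txt

# A space in the path is what forces the quotes:
print(list2cmdline(["C:\\Program Files\\TheFormLetterMachine\\TheFormLetterMachine.exe"]))
# -> "C:\Program Files\TheFormLetterMachine\TheFormLetterMachine.exe"
```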

57
The Form Letter Machine / Re: Making command line arguments work
« on: June 15, 2007, 11:04 AM »
My guess would be that it should read:
"C:\Program Files\TheFormLetterMachine\TheFormLetterMachine.exe" -out="C:\users\mehl\announcement.07.txt"
I moved the " from before the -out to before the C:\ at the end.

I agree.  Thanks for the correction.  But it still doesn't work.  I also tried removing the double quotes altogether using symbolic variables:

%ProgramFiles%\TheFormLetterMachine\TheFormLetterMachine.exe -out=%HOMEDRIVE%%HOMEPATH%\announcement.07.txt
But that's not working either.  Strange.

58
The Form Letter Machine / Making command line arguments work
« on: June 15, 2007, 08:58 AM »
I set up a Windows shortcut with the following command line parameters:

"C:\Program Files\TheFormLetterMachine\TheFormLetterMachine.exe" "-out=C:\users\mehl\announcement.07.txt"

However, when I press the [close] button on The Form Letter Machine, the anticipated announcement.07.txt output file is never generated.  What am I doing wrong?  I'm running v1.04.01.

59
Clipboard Help+Spell / Re: Memory Usage
« on: June 14, 2007, 12:20 PM »
I will add a feature I add to most of my programs that lets you choose whether you want them to release their working memory when minimized.

I think it's standard practice for Windows apps to release their memory (like for their GUI) into the free-memory pool when they're minimized, and into the zero-page pool if they never need that memory again.  Understand, the first pool is for temporarily releasing memory that otherwise needs to be reclaimed when restoring the app to the desktop.  The idea is that if no other programs need that space, then those pages remain available to the app that originally released them when it gets restored; otherwise, those free pages can be made available to external apps instead of calling the swapper.

The zero page pool is for permanently releasing memory (never to be reclaimed in the future).  The OS immediately writes zeros in these pages (to prevent browsing) so they can be immediately allocated to unrelated processes.

60
The Form Letter Machine / Re: Variable Management (Requests)
« on: June 14, 2007, 10:14 AM »
4 modifications / features ... for configuration trees with many variables.
  • Only display the variables used in the currently activated portions of the tree. For example, if my currently selected check box items and radio button items make no use of the %MeetingDate% variable, the variable should be hidden.
-the3seashells (April 09, 2007, 03:29 PM)
This is a good feature, but there needs to be some way to disable it so you can locate discontinued variables and remove them.

  • Allow variables to be clicked on and filled in as in a form in a word document. When looking at the entire document (the bottom left area where the entire letter is shown), it would be much easier to type in the information there than hunt for, and then update the variable from the variable list.

  • Clearly mark unfilled variables in the ‘completed letter preview’ area. I find myself spending the majority of my post-TFLM proofing time checking to make sure I have not forgotten to include / replace all of the needed variable names.
-the3seashells (April 09, 2007, 03:29 PM)
Well, there are really two distinct types of variables: (1) predefined variables and (2) fill-in variables.  They both need to be handled differently.  The TFLM currently handles the first type well.

Firefox handles fill-in variables by making suggestions as you type in the fill-in field.  For filling in my name and address on web forms, I like Firefox's method.  Firefox also has a method of deleting obsolete fill-in suggestions, but it requires a cryptic hotkey combination I can never remember.  I also like this delete-suggestion feature, but I wish it was done with a right-click context-menu item instead of a hotkey combination.

61
Living Room / Re: adding SATA to a non-SATA motherboard
« on: December 23, 2006, 12:58 PM »
if you have a 5yr old motherboard, perhaps the answer is to buy a new motherboard (or new computer?).  I realize that's easy to say and not so easy to afford, but at some point trying to upgrade these things is more trouble than it's worth..
Argh, I think you're right.
I agree.  If you're going to buy a new disk, I would indeed get a SATA drive.  But in addition to that, I would replace my motherboard with one which had a SATA interface as well as a PCI Express graphics interface.  That means buying a new PCI Express graphics adaptor that's compatible with Vista.  You can still keep Windows XP, although if you plan to upgrade to Vista, I would do so when you replace the disk drive.

The only thing that is important and lacking from my system right now is a robust backup method, which is why I'm splurging on the two 500GB hard drives.
Now that's a whole different issue.  If you're looking for a backup disk, I would buy an external drive with an "eSATA" interface.  This will have a slightly different connector than the standard "internal" SATA connector to facilitate better shielding for longer cable lengths.

When buying an external drive, if you're a computer professional, I wouldn't fool around.  Get something like the G-Drive, which has everything: aluminum case, no fan, eSATA, FireWire 400 & 800, and USB 2.0.  Of course, the eSATA connection will be the fastest, followed by the FireWire connections.  Here's an example source: http://www.academicsuperstore.com/market/marketdisp.html?PartNo=752728

If you're not a computer professional, getting an external drive with only an eSATA and USB interface would be good enough.  If it's a high performance disk (e.g. it gets hot), I would still shoot for the aluminum case w/o the internal fan.  Slow (cool) drives can be put in plastic cases.

The G-Drive I mentioned above allows air to circulate both above and below it, so it's cooled on both sides.  That's what you want to shop for if you're buying a high-performance (runs hot) drive.  Most people may be happier with a slower, cooler-running drive though.

62
Must be some extensions playing tricks, mouser.  FireFox rendering does seem a slightly tiny bit slower than IE, but not much,
I agree.  My FF 1.5.0.8 is very fast.  You never told us whether this slowness was due to excess page faults (most likely!), CPU time, I/O waits, or network slowness.  Can you fire up Procmon (Process Monitor from Sysinternals) and give us a screen shot of the Performance tab results for Firefox?  (How do you post a screen shot on this forum anyway?  I would post my Performance tab, but I don't know how.)

For my memory working set (on Firefox), I'm seeing 81,000K.  Yours will be somewhat higher if you have some extensions that hog memory.  If your working set is really large, then tell us more about your Page Fault delta (which will cause I/O waits).

You may be able to tweak some parameters in Firefox to reduce page faults, but you'll probably need to buy more memory chips when you do.  Uninstalling excess extensions may be a better alternative.

63
Concerning Fineprint, is it possible to configure it to print 12-up as well?  I'm asking because I sometimes design tickets for events (like dances), and I want to print 12-up and 16-up on a page.

Does anyone remember DynoPage on the Mac?  One nice thing about DynoPage was that you could design your own layouts and margins for these different layouts.  This let you use non-standard paper sizes, and these new paper sizes would show up in your Page Setup screen for any particular program like PageMaker.  DynoPage didn't have a 16-up "tickets" layout, so I designed one for it, complete with cut marks for dicing the tickets.

DynoPage was also smart.  It did much of its manipulation within the PostScript interpreter of the printer.  Of course, you would have to be using a PostScript printer to take advantage of this.  As an example, you can page-lock a watermark in a PostScript level 2 (or 3) printer so it prints out on every page automatically.

I often thought about learning PostScript but I really don't have time to learn yet another programming language.  I wish I did know it though.

64
Developer's Corner / Web development tools that integrate with Firefox
« on: December 10, 2006, 03:25 PM »
Here's a website that lists all the great web developer tools, including FireBug and the Web Developer Toolbar.  This site is also discussed in my StumbleUpon blog (linked on my DonationCoder profile).

http://cyber-knowledge.net/blog/2006/10/05/20-firefox-extensions-that-every-web-designer-should-know-about/

65
I have strongly discouraged users from formatting their USB flash drives with NTFS directories if they are taking them outside their Windows domain for fear it might create ownership problems down the road.
-superticker

Not a good suggestion though - if somebody formats his drive as FAT, he'll be in a nasty situation once he's filled up some 100GB and needs a file that's >4GB large :) (but okay, at least there's transparent conversion to NTFS with "convert.exe").

Actually, all USB flash disks (and ZIP disks) are shipped formatted as FAT.  SanDisk uses FAT16, and that would limit the flash volume to 4 GBytes.  I guess I don't know why SanDisk doesn't use FAT32.  Does anyone know?
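The 4-GByte figure follows from simple arithmetic: FAT16 cluster numbers are 16 bits (a few values are reserved), and the largest FAT16 cluster size the NT family accepts is 64 KB. A quick check, treating 2^16 as the cluster count for round numbers:

```python
clusters = 2 ** 16           # FAT16 cluster numbers are 16 bits (a few reserved)
cluster_size = 64 * 1024     # largest FAT16 cluster size NT-family Windows accepts
max_volume = clusters * cluster_size
print(max_volume // 2**30)   # -> 4 (GBytes)
```

Many formatters cap FAT16 at 2 GB (32 KB clusters) for DOS/Win9x compatibility, which may be part of why vendors are conservative here.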

The problem is that users want to convert their new USB flash drive from FAT to NTFS, and that's what I'm discouraging until I figure out how foreign NTFS file ownership would be handled between office (central domain controller) and home (foreign host).

66
Windows stores the EFS encryption key, encrypted, in the registry... for domain logons, I assume it's stored on the domain controller. For non-domain machines, you'll probably need to make sure that all machines have the right credentials, and perhaps SIDs as well.

The problem with NTFS files is they have the concept of "ownership" attached to them.  If that ownership is attached by a central authority (domain controller), then switching disks among domain members shouldn't be a problem.  But when you mount a "foreign volume" from outside the central authority, then who owns these files?  ...the Default User?

Should you even be able to mount a foreign volume?  If so, then who takes ownership of the Default User's files?  In this weird case, I "think" the Default User would be the local administrator, since the creator of the original domain account to which these lost files once belonged would not be available on a foreign, non-member host.  The other possibility is that there is no defined Default User; therefore, you can't mount the foreign volume.

I have strongly discouraged users from formatting their USB flash drives with NTFS directories if they are taking them outside their Windows domain for fear it might create ownership problems down the road.  Even if those flash drive files are owned by the Everyone group, it's still the Everyone group for that specific domain, not the entire Windows world.

If there is a safe approach for defining NTFS ownership on portable (foreign) disk volumes, could someone step forward and explain this?  For security reasons, I don't like users using FAT volumes, but for portable disks, I'm not sure how to get NTFS ownership to work.

67
If you use NTFS on the portable drive, you can use Windows' built-in EFS encryption.... Doesn't work on "Home" editions of XP, though.
This comment brings up a question I have about NTFS file security.  If I move an NTFS disk between two Windows Pro machines belonging to the same domain (and using the same enterprise license key for Windows Pro), the encrypted files should be okay (if they're authenticated with the same domain controllers), right?

What if I move an NTFS disk to another Windows Pro system that's part of a different authority (different domain or different license key)?  Won't--or shouldn't--those encrypted files be unreadable?  Or am I missing something here?

Will you even be able to mount an NTFS volume that comes from a foreign domain (or license key)?  My understanding is that foreign NTFS volumes present mounting problems, especially when they don't have Everyone read/write access.  Does someone know a reference that discusses this more?

Some backup software (like Paragon) lets you change the volume SID on an NTFS disk, but I always thought you had to decrypt all files before doing so or bad things would happen.

68
General Software Discussion / Re: Windows memory-paging behavior
« on: November 21, 2006, 11:40 AM »
This helped me understand a bit more about windows memory handling: http://shsc.info/WindowsMemoryManagement
That's an excellent description about the Windows memory-paging system.  The important thing I learned is that an application's active virtual working-set (of memory) is automatically reduced when that application is minimized.  Moreover, its unneeded virtual pages are returned to the "standby" page pool where they remain intact but are up for grabs.  It's much like moving a file into the trash.  It's still there (but up for grabs if needed).  When the minimized application is restored, those same intact standby pages are placed back into service without creating a page fault to disk.  Very nifty.

This also explains why applications page while idle when there's plenty of memory.  They're simply returning pages they won't need for a while to the standby page pool just in case other applications do need them.  This cooperative memory management ensures available pages will be there if needed.

My only gripe is that a color-coded memory-map diagram of all the paging areas (free page, zero page, and standby page) and RAM memory areas is not included so one can see how everything overlaps at a glance.  You basically have to draw this diagram yourself to follow the article.

69
Post New Requests Here / Re: IDEA: TV tuner software
« on: November 20, 2006, 06:49 PM »
So the problem is with the programmer's implementation of ATI's MMC video recording software, not the design of Windows.  Interesting.  Maybe I need to look at getting new video recording software rather than changing my disk hardware configuration.
-superticker
Well, you can't always know beforehand how large the file needs to be. But you can reserve some minimum size, and then make sure you write largish buffers.
Yes, but you know the recorded programs will have 30-minute quantum sizes, and from that you can reliably compute the required contiguous blocks, plus a little extra.  Then you can release the little extra if you don't need it.  That's how MVS (the IBM mainframe OS) typically does it.

If you have a partition that's mostly used for large files, you can reduce fragmentation by increasing the cluster size. If you almost solely use it for video editing purposes, you might want to try going as high as 64KB cluster size.
I seriously thought about doing exactly that, and if I knew fragmentation would be a serious problem, I would have done that when installing ATI's MMC recording software.  But the better solution would be to find a smarter video writing application that preallocates extents properly.

Using the right cluster size is an issue.  You need to fit about 5 clusters per file in the disk's on-disk cache, and you want to have about 5-8 files open at a time, so 5x8=40 clusters need to fit in the disk's on-disk cache.  Unfortunately, a standard non-media-server disk has a smaller on-disk cache.  That limits your cluster size.
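Plugging numbers into that rule of thumb (the 8 MB cache is an assumed figure for a typical desktop drive of that era, not a measured one):

```python
cluster_size = 64 * 1024      # the 64KB cluster size suggested above
clusters_per_file = 5         # ~5 clusters buffered per open file
open_files = 8                # ~5-8 files open at a time
needed = cluster_size * clusters_per_file * open_files   # 40 clusters total
disk_cache = 8 * 2**20        # assumed 8 MB on-disk cache

print(needed // 2**20, needed <= disk_cache)   # -> 2 True
```

So 40 64-KB clusters need about 2.5 MB of cache, which fits an 8 MB cache comfortably; a drive with a much smaller cache forces a smaller cluster size.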

Many server-oriented RAID controllers let you specify a look-ahead buffer size for your sequential-file media disk arrays.  The problem is that this setting affects every disk on that RAID system.  As a result, you have to place your media server disks on an independent RAID controller from your system device, which I don't like.

But my home entertainment system doesn't use RAID.  That's just too much.  I want to keep my home system as simple as possible.  When I replace its disk, I'm going to create a separate large-cluster-size partition as you suggested just for recorded programming.

70
Post New Requests Here / Re: IDEA: TV tuner software
« on: November 20, 2006, 10:16 AM »
Doesn't the Win32 API support disk block preallocations?

SetFilePointer() to the expected output file size, SetEndOfFile(), and SetFilePointer() back to offset 0. Presto, instant action on NTFS, and as few fragments as possible.
Thanks.  I'm assuming these calls will work for sequential access files (as well as random block access).
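A rough Python analogue of that sequence (seek/truncate standing in for SetFilePointer/SetEndOfFile; whether blocks are physically reserved or the file is left sparse depends on the filesystem):

```python
def preallocate(path, size):
    """Set the file's length up front, then rewind to offset 0 so
    sequential writes land in as few fragments as the filesystem allows."""
    f = open(path, "wb")
    f.truncate(size)   # analogous to SetFilePointer(size) + SetEndOfFile()
    f.seek(0)          # back to offset 0, ready to record
    return f
```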

So the problem is with the programmer's implementation of ATI's MMC video recording software, not the design of Windows.  Interesting.  Maybe I need to look at getting new video recording software rather than changing my disk hardware configuration.

What bothers me most is that the 2000 extents for a one hour (1 GByte) program are scattered all over the place on the drive.  The resulting slow response causes about 1% of the frames to be dropped even on moderate compression modes.  One would think the video recording programmers would have preallocated each file extent in 15- or 30-minute program segments (256 MByte contiguous blocks) to reduce file fragmentation, but they didn't.
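Using the figures in this post, the preallocation arithmetic is simple: at roughly 1 GByte per hour, a 15-minute segment is the 256-MByte contiguous block mentioned above (the per-hour rate is this thread's ballpark figure, not a spec):

```python
bytes_per_hour = 2**30            # ~1 GByte/hour at this compression setting
segment_minutes = 15
extent = bytes_per_hour * segment_minutes // 60
print(extent // 2**20)            # -> 256 (MBytes per preallocated extent)
```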

Is it fair to assume most other video recording software preallocates large extents (contiguous blocks) to reduce fragmentation and speed disk access as one would expect?

71
Post New Requests Here / Re: IDEA: TV tuner software
« on: November 19, 2006, 11:58 AM »
Being new to the TV tuner world in 2002, I was very concerned placing a TV tuner card on the PCI bus would eat up all the bus bandwidth since the tuner card would be constantly sending data to the video card.  Not wanting to take a chance, I bought an ATI 8500DV which is a combined video and tuner card so the PCI bus stays out of the loop.

I've been happy with my ATI 8500DV tuner/video card over the years, although a real TV tuner may have slightly better FM sensitivity.  But I always wondered if having separate tuner and video cards would really slow the PCI bus down as much as I feared?  And now with the PCI Express bus, is this bus contention even that much of an issue today?

For my next video card, I'm looking for a dual-tuner PCI Express card.  Again, I favor putting the tuner and video card on the same card if possible.

---------

I definitely find the ATI MMC video recording software to be slow.  The main reason for this is that it fails to preallocate file extents appropriately.  Doesn't the Win32 API support disk block preallocations?  Gee, a recorded program often has 2000 file extents--that's ridiculous for sequential file access!  Moreover, the cluster size on my disk is only 4KBytes.

Okay, I did think about buying a second drive that's SCSI (not parallel EIDE) and formatting it to a media-server cluster size of 2MBytes or so.  Is that what everyone else does?  Is a media server configuration typically used for "home entertainment" PCs?  Do people also buy media-server hard drives with 128MByte on-disk caches for their large sequential access needs?  What cluster sizes are you formatting these media beasts with?

I just wish the ATI MMC video recorder would better preallocate its file extents.  2000 extents on a single sequential file is just plain ludicrous.  Something is wrong here.

72
my motherboard decided to help me solve this dilemma by not working..
so now I'm off to buy a sound card..  :tellme:
You never said whether it's a hardware or a software problem.  95% of the time it's a software problem.  Open the Sound control panel and click the Hardware tab.  Be sure all the codec kits that should be there are there.  If you're in doubt, reinstall the kits from the CD that came with your motherboard.  (You can also check the registry if you know what you're looking for.)

Moreover, open up each kit and verify all its driver components are running right.  Double-click on driver details to see their properties.  See if all the drivers (*.sys) are signed.  Although they may not have been signed originally, there are probably signed versions available by now.  If you think there's driver corruption and you haven't run chkdsk recently, do it now.  Any disk with more than 7% bad sectors should be discarded; it's got mechanical problems.

If it really is a hardware problem, chances are you'll get a little something out of it--maybe a crackle.

73
General Software Discussion / Re: My favorite software! What's yours?
« on: November 15, 2006, 02:07 PM »
Courier (email)
I'm glad to see someone mentioned the Courier 3.5 e-mail client.  What makes Courier really good is not the client itself (although it's very good), but that it integrates with the Time & Chaos 6.0 PIM.  T&C is an excellent PIM.  There are many user definable fields, it syncs to your PDA (either Palm or Pocket PC), and it even exports its database in XML format.

Courier can add e-mail addresses to T&C and vice versa.  Chaos Software is currently selling Time & Chaos 7.0, but currently only the T&C version 6 database is compatible with Courier 3.5.

Courier 3.5 is an excellent e-mail client in its own right, but users that get lots of spam and know how to write strong regexp's will like it best with its powerful filtering capacity.  You can even have its filters color code your e-mail titles.
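As an illustration of the kind of regexp filtering meant here (the pattern is an invented example, not one of Courier's rules):

```python
import re

# Catch a spam word even when it's padded with the usual punctuation
# tricks (v-i-a-g-r-a, v.i.a.g.r.a, ...), case-insensitively.
SPAM_SUBJECT = re.compile(r"(?i)v\W{0,2}i\W{0,2}a\W{0,2}g\W{0,2}r\W{0,2}a")

def is_spam(subject):
    return bool(SPAM_SUBJECT.search(subject))
```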

Courier's downsides are its (1) poor handling of default and foreign fonts, (2) poor IMAP4 support, (3) poor template support, and (4) refusal to reflow quoted text in replies.  Many other e-mail clients aren't much better.  TheBat! doesn't suffer from these four flaws, but it won't integrate with a powerful PIM the way Courier can.

74
General Software Discussion / Windows memory-paging behavior
« on: November 12, 2006, 11:46 AM »
I'm hoping this is the right place for OS (computer science) discussions.  This is continued from a thread about Windows drivers.

Windows NT doesn't do "swapping", it does "paging" - ie., it swaps individual pages in and out, instead of full processes.

There are times when Windows gets dog slow when it's running out of physical memory for two applications that want to run.  It's almost as if both applications must entirely fit in physical memory to make Windows work.  Now, VAX/VMS would have been smart enough to page both processes successfully, but not Windows.  I also think Windows does way too much paging.  I've got lots of memory available, yet Windows is always paging IExplorer.  What's the point of that?

75
Developer's Corner / Re: Real-time OS drivers and their scheduling
« on: November 12, 2006, 11:32 AM »
NT schedules strictly at thread granularity, no consideration is given to what process the thread belongs to. - so yes, an app with 10 threads could "starve" an app with 2 threads. So you need to do responsible design; don't have a bunch of CPU-intensive threads on a single-CPU system. Having multiple threads isn't a problem as long as most of the threads are blocking for events,...
And I think I'm okay with that design because it simplifies the scheduler, which means it works faster.  The only downside I see is if someone wrote some malware (like a packet sniffer) with lots of threads that would create a denial-of-service effect on your computer.  But that would bring your attention to the malware, and the malware designer wouldn't want to do that.

In most OSes, everything in "kernel mode" (which includes the drivers and the kernel/monitor) is mapped together such that execution can move from one place to another without the overhead of a protection switch....  Does Windows work the same way?
Yup, everything kernel-mode is basically lumped together in the high part of the address space (upper 2GB, unless a boot.ini switch is added to make the split 3GB for usermode and 1GB for kernel mode).

That brings up a new topic.  Is it possible to have more than 4GBytes of memory in Windows and still use the 32-bit version of the OS?  For example, could you map one 4GB (32-bit) block for just the kernel (not that you would really need/want to) and the other 4GB block for user mode, so that you have an 8GB machine running 32-bit Windows?  I realize this design may not be ideal, but is it possible?  (Go ahead and move this new topic to another thread if the answer requires some discussion.  Maybe it won't.)

Well, you run out of MMU register pairs for each separately protected module that must be constantly mapped into memory.  How many MMU mapping registers does the Pentium processor have?
x86 doesn't work that way :)

You have a register (CR3) that points to a page table (physical memory address). Each process has its own CR3 value. The page table is a multi-level hierarchy that maps linear->physical addresses, including some additional info like access rights (since pages are 4k and must start at 4k boundaries, there's 12 bits for per-page stuff).
I'm trying to decide whether, as long as these page tables stay in the L1 cache, this is an acceptable solution.  My initial thinking is that if you had dedicated mapping registers (without any memory contention between the processor, address decoder, and address mapper), you could have more parallel operations (instruction fetching & effective address computation).  But, in truth, some pipelining and segmenting of the L1 cache could be used to avoid this potential conflict.

My only comment is, as you make your pipeline longer, you suffer more penalties (like pipeline refilling) on branch instructions.  I do know the Pentium does look-ahead address computation on branch instructions--and maybe it needs to for this reason.

I guess I favor the dedicated-register design for the MMU.  It's cleaner, and you don't have to worry about several subunits fighting over the same L1 cache for their parallel activities.  You could set aside (segment) part of the L1 cache for address-mapping info, but then you would have a messy form of the dedicated MMU register design.
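For reference, the classic (non-PAE) 32-bit walk described in the quote splits the linear address 10/10/12; the arithmetic is just bit slicing:

```python
def split_linear_address(va):
    """Split a 32-bit x86 linear address into the two page-walk indices
    plus the page offset (classic non-PAE 4KB paging)."""
    directory = (va >> 22) & 0x3FF   # 10-bit page-directory index
    table     = (va >> 12) & 0x3FF   # 10-bit page-table index
    offset    = va & 0xFFF           # 12-bit offset within the 4KB page
    return directory, table, offset

print(split_linear_address(0x00401234))   # -> (1, 1, 564)
```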

NT doesn't do "swapping", it does "paging" - ie., it swaps individual pages in and out, instead of full processes.

There's a followup question on this at https://www.donation...index.php?topic=6142
