Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - normeus

1
I have to transfer data out of a database format called "neoaccess database technology" from 1995, which was used in a product called quickImage. The format is "object based", so using my hex editor to change one byte in a field at a time and compare results is getting me somewhere at a snail's pace. I figured it wouldn't hurt to ask if anyone had done something similar and might have code to share (C, C++, VB, Perl, PHP, Objective-C, or anything at this point) so I could at least figure out why these record pointers are so random.

For example, offset 0x13F: 2 bytes represent the number of records, which can be a max of 0xFFFF.

The first record pointer is located at 0x55BA.
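
Here is a rough Python sketch of what I mean, just to make those offsets concrete. The field widths and endianness are guesses on my part, and "quickimage.db" is only a placeholder filename:

Code:
import struct

def inspect_neoaccess(path):
    with open(path, "rb") as f:
        data = f.read()

    # 2 bytes at 0x13F: the record count (max 0xFFFF per the notes above).
    # Big-endian is assumed here; switch to "<H" if the value looks wrong.
    (record_count,) = struct.unpack_from(">H", data, 0x13F)

    # First record pointer at 0x55BA; assuming it is a 4-byte file offset.
    (first_record_ptr,) = struct.unpack_from(">I", data, 0x55BA)

    print(f"records:          {record_count} (0x{record_count:04X})")
    print(f"first record ptr: 0x{first_record_ptr:08X}")

    # Dump a few bytes at the pointed-to location to eyeball the record layout.
    chunk = data[first_record_ptr:first_record_ptr + 32]
    print(" ".join(f"{b:02X}" for b in chunk))

if __name__ == "__main__":
    inspect_neoaccess("quickimage.db")  # placeholder filename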

Thank you if you have any answers for me, or even just comments.


2
Does anyone know of a good parser for Wikipedia content? I don't want to write a full parser if I don't have to.
The parsing shouldn't be too bad in Perl (or any language with regex), but I feel like there should already be a program for this, since it sounds like something a lot of people would want to do.

Export a file from Wikipedia (I would use a random animator's name as an example, "Craig_Clark"):
wikipedia export

Then, from the XML page, I would only use the text part:
  <text xml:space="preserve" bytes="3618"> text text 3618 bytes of text </text>
But what I need to do is convert the wiki markup (ex: [[animator]]) to regular text (ex: animator).
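
To make it concrete, this is roughly the kind of thing I have in mind in Python. The export filename is just a placeholder, and the regex only covers the simple [[link]] and [[link|label]] cases, not templates or tables:

Code:
import re
import xml.etree.ElementTree as ET

def extract_wikitext(xml_path):
    # Pull the <text> element out of a Special:Export dump.
    # The export schema is namespaced, so match on the local tag name.
    tree = ET.parse(xml_path)
    for elem in tree.iter():
        if elem.tag.endswith("}text") or elem.tag == "text":
            return elem.text or ""
    return ""

def wikitext_to_plain(wikitext):
    # [[target|label]] -> label, [[animator]] -> animator
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]", r"\1", wikitext)
    # Strip '''bold''' and ''italic'' markers.
    text = re.sub(r"'{2,}", "", text)
    return text

if __name__ == "__main__":
    plain = wikitext_to_plain(extract_wikitext("Craig_Clark.xml"))  # placeholder export file
    print(plain[:500])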
Anyway, thanks for your comments.
