Latest posts of: IainB
126 Software / Find And Run Robot / Re: FARR as bookmark manager? on: April 14, 2015, 05:40:37 AM
You could save bookmarks as .url files on your computer and put tags in the url file names.
FARR and FARR aliases can then be used to search among those bookmarks.
Long ago I made the tool Tourl to do just that.

Two caveats:
  • 1. to be able to search not only the title of the bookmarked page but also its url you need to put (parts of) the url in the filename. Tourl has hotkeys for doing that.
  • 2. Tourl is old, maybe some parts of the code need an update to run with the latest version of Autohotkey. But you can try it out and the source is included in the download.
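The .url-file approach described above can be sketched like this. (This is an illustrative Python equivalent only; Tourl itself is an AutoHotkey tool, and the tag/filename conventions here are my own, not Tourl's.)

```python
from pathlib import Path

def save_bookmark(title, url, tags, folder="."):
    """Write an Internet Shortcut (.url) file whose name carries the
    page title, the tags, and the site's host name, so a filename
    search tool such as FARR can match on any of them."""
    # Crude host extraction, so part of the URL lands in the filename.
    host = url.split("//", 1)[-1].split("/", 1)[0]
    tag_part = " ".join(f"#{t}" for t in tags)
    # Strip characters Windows does not allow in file names.
    safe = "".join(c for c in f"{title} {tag_part} ({host})" if c not in '\\/:*?"<>|')
    path = Path(folder) / f"{safe}.url"
    # .url files are INI-style, with a single [InternetShortcut] section.
    path.write_text(f"[InternetShortcut]\nURL={url}\n", encoding="utf-8")
    return path

p = save_bookmark("DonationCoder Forum", "https://www.donationcoder.com/forum/",
                  ["freeware", "community"])
```

A FARR alias pointed at the folder of .url files can then match on title, tag, or host, since all three appear in the filename.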

This comment caught my attention because it referred to some of the things (including capture and linking of content, keywords and bookmark/URL) that I was after in this request: Feature request: select/display Grid column data > horizontal rows in Memo pane. (Though I did not specifically state in that post what my requirements were.)
I am intending to use CHS (Clipboard Help & Spell) as my de facto bookmarks database for all browsers.

The background to this:
My view is that bookmarks are arguably an archaism. They were no doubt a "good idea" - and quite useful too - for whatever our "requirements" were construed to be at the time they were invented, but they do not seem to have evolved much since then to meet what our requirements have become today. In other words, bookmarks would seem to be more of a customary hangover from the days when our requirements were different to what they might be today.
Nowadays though, look what happens when I, for example, bookmark this page in Firefox: (press Ctrl+D)


The current manifestation of the bookmark form in Firefox v38.0 Beta includes:
1. A field containing the string "page name" (data captured at this point) for edit/acceptance.
2. A field containing the string "URL" (data captured at this point) for edit/acceptance.
3. A field displaying the Folder name for storing the bookmark (the user either accepts the default or selects another folder).
4. A field for entering string(s) for "Tag name" - an optional data entry/edit/selection field.
5. A field for entering string(s) for "Keyword" - an optional data entry/edit/selection field.
6. A field for entering the string "Description" - the de facto page subject name, with optional data edit/entry.
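Taken together, those six fields amount to a simple record structure. A minimal sketch of such a browser-agnostic bookmark record (the field names are my own, not any browser's actual schema):

```python
# Illustrative bookmark record mirroring the six fields of the
# Firefox Ctrl+D dialog described above.
bookmark = {
    "name": "Example page title",              # 1. page name
    "url": "https://example.com/page",         # 2. URL
    "folder": "Unsorted",                      # 3. storage folder
    "tags": ["example", "demo"],               # 4. optional tag strings
    "keyword": "expg",                         # 5. optional keyword shortcut
    "description": "De facto page subject name",  # 6. optional description
}
```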

This website page bookmark data/metadata is stored in a proprietary Firefox bookmark database, which is apparently different to the proprietary IE bookmark database, which is apparently different to the proprietary Google Chrome bookmark database, etc.
Since one may understandably wish to use the same bookmarks in the same browser but using different PCs in different physical locations, or across different browsers in the same or different PCs, then there is a potential problem - in that this is a recipe for potential bookmark duplication, loss and confusion.
The problem is compounded when one realises how one has to use a search system peculiar to each browser if one wants to access those bookmarks in those proprietary browser bookmark databases. Then, of course, there's the problem of syncing or backup/recovery of those proprietary browser bookmark databases. It's all a manmade PITA, but at least it's avoidable - which is why I pretty much abandoned the use of those proprietary browser bookmark databases, though they are useful for my children's browsing.
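To illustrate how differently each browser stores the same data: Chrome keeps its bookmarks in a JSON file named "Bookmarks" inside the profile directory, whereas Firefox uses a SQLite database (places.sqlite). A sketch of walking a Chrome-style bookmark tree (the sample data is mine; the nesting follows Chrome's layout as I understand it):

```python
import json

def walk(node):
    """Recursively yield (name, url) pairs from a Chrome-style
    bookmark tree: folders have a 'children' list, leaves have
    type 'url'."""
    if node.get("type") == "url":
        yield node["name"], node["url"]
    for child in node.get("children", []):
        yield from walk(child)

# A tiny sample in the Chrome layout (abbreviated).
sample = json.loads("""{
  "roots": {
    "bookmark_bar": {
      "type": "folder",
      "children": [
        {"type": "url", "name": "DonationCoder",
         "url": "https://www.donationcoder.com/"}
      ]
    }
  }
}""")

pairs = [p for root in sample["roots"].values() for p in walk(root)]
```

Firefox's places.sqlite would need an entirely different reader (SQL queries against its tables) to extract the same (name, url) pairs - which is exactly the duplication problem a single browser-agnostic store would avoid.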

I refuse to accept being locked-in to any given browser like that, as I use up to 3 browsers, and I want to be able to standardise the use of bookmarks across browsers, without having to worry about tripping over duplication/loss, differing standards or other idiosyncrasies.
Thus, one of the objectives of my CHS feature request (above) is that one could perhaps start to move towards having a "browser-agnostic" bookmark database holding a common set of bookmarks for all browsers to use.
Certainly, CHS would seem a logical tool to capture and store the bookmark data; as for accessing it for use in a browser, then possibly (say) FARR could be the medium of access, linking to CHS for this database. There is, after all, a "set" of DC tools that could possibly be integrated to provide the required functionality to meet a given requirement:

For example:
Re: Feature request : input field
let me consider the possibility of having chs interact or hand off to another tool as it pastes -- that might make it easy to do what you want and keep chs from getting over complicated.

I thought this looked very positive.    Thmbsup
We shall see.
127  News and Reviews / Mini-Reviews by Members / Re: WizNote (a PIM from China) - Mini-Review + Provisional User Forum on: April 11, 2015, 02:32:10 AM
@motion12, looks like one can install the remote part on own server now. Pretty big deal if you ask me.
I haven't done it. Looks enterprisey.
@Ian, which is your primary notetaker now, onenote or wiznote? You have been a vocal advocate of both for a while smiley
Trying to make this decision myself. ...

My response:
  • I hadn't realised that about installing the remote part of WizNote on your own server. That's very interesting.

  • My primary notetaker is currently OneNote. Though OneNote still feels to me a bit like it is in Beta test, one of its major advantages for me is the integration with MS Office 2013 products and IE. However, that is only on one laptop. I have been experiencing so many problems which seem to prevent smooth installation on a second laptop, and MS Support is so silent on the problems (which are common to many other users), that I am wondering whether this isn't a cunning strategy by MS to force you onto Office 365 (which does not meet my requirements). The upshot is a lack of trust - I do not trust MS not to take advantage of me.
    I am trying to wean myself off of the legacy InfoSelect8, and migrate to OneNote, but it will probably take a while yet. Inertia.

  • WizNote is waiting in the wings there with a tad too many uncertainties/unknowns for me to make a rational decision about it. So it's wait and see.
128  Main Area and Open Discussion / General Software Discussion / Re: Firefox Extensions: Bookmark Manager AM-Deadlink discontinued and crippled on: April 09, 2015, 02:52:28 PM
Useful catch here: Bookmark Manager AM-Deadlink discontinued and crippled - gHacks Tech News

This is pretty useful too: The ultimate bookmarks guide - gHacks Tech News
129 Software / Screenshot Captor / Re: Scan Image File Saving to PDF without also creating image file on: April 09, 2015, 02:12:06 PM
The Fujitsu fi-7160 - that's one nice scanner you have there.

Most modern scanners come with bundled software that enables scanning a document directly to PDF (and usually OCRing it as well) without creating or otherwise leaving any intermediate .jpg or other image files lying around. If that is the case, then using Screenshot Captor to "drive" the scanner would seem superfluous and not a good idea - i.e., the bundled software would be the most appropriate driver for the scanner.

The Fujitsu fi-7160 scanner would seem to be no exception to this - it has the PaperStream IP (Image Processing) and PaperStream Capture software bundled with it.
The PaperStream IP spec says:
  • Cleans up the toughest documents, including decorated backgrounds, for improved OCR, reduced rescans, and curtailed specialized profile creation
  • Auto-rotates for less paper preparation and automatically fills in hole-punches and torn edges
  • Color Clean Up creates a uniform background for better reproduction and reduced file sizes on color scans

The PaperStream Capture spec says:
Standard File Outputs
Scan to PDF, PDF/A, PDF with OCR, TIFF Group 4, Multipage TIFF, JPEG, BMP with a single click. ...

I have found that, if one does not have the latest software for a scanner, then one is usually able to download/upgrade it for free for one's scanner from the manufacturer's Support website.
Have you any experience of the Fujitsu ScanSnap scanner as well, or is it just the Fujitsu fi-7160? I'd be interested in any user experiences of and comments on these scanners.
130  Main Area and Open Discussion / General Software Discussion / Re: So, what pdf reader app is your fav? on: April 09, 2015, 01:31:10 PM
I'm still using PDF-XChange Viewer (latest $FREE version) after this: PDF-XChange Viewer ($FREE version) - Mini-Review.
The OCR is pretty good.
131  Main Area and Open Discussion / Living Room / Re: Knight to queen's bishop 3 - Google so transparent that they are opaque? on: April 09, 2015, 01:02:41 AM
Following this comment: (my emphasis)
Potentially relevant to this thread - I just received this email (follows) from Google:
(Copied below sans embedded hyperlinks/images, but I have given just the basic links without all the concealed Google/NSA ID coding that was in the hyperlinks.)
Google regularly receives requests from governments and courts around the world to hand over our users' data. When we receive government requests for users' personal information, we follow a strict process to help protect against unnecessary intrusion.

Since 2010, we have regularly updated the Google Transparency Report with details about these requests. As the first company to release the numbers, as well as details of how we respond, we've been working hard for more transparency.

- whilst at the time I regarded it skeptically as probably loaded with BS and corporate doublespeak, there was no indication that I just may have been right - that is, until I read with interest today this rather long and apparently well-researched and informative article by Ben Edelman:
(Copied below sans the many embedded hyperlinks and cross-references.)
Beyond the FTC Memorandum: Comparing Google's Internal Discussions with Its Public Claims

April 1, 2015

Disclosure: I serve as a consultant to various companies that compete with Google. That work is ongoing and covers varied subjects, most commonly advertising fraud. I write on my own—not at the suggestion or request of any client, without approval or payment from any client.

Through a FOIA request, the Wall Street Journal recently obtained--and generously provided to the public--never-before-seen documents from the FTC's 2011-2012 investigation of Google for antitrust violations. The Journal's initial report (Inside the U.S. Antitrust Probe of Google) examined the divergence between the staff's recommendation and the FTC commissioners' ultimate decision, while search engine guru Danny Sullivan later highlighted 64 notable quotes from the documents.

In this piece, I compare the available materials (particularly the staff memorandum's primary source quotations from internal Google emails) with the company's public statements on the same subjects. The comparison is revealing: Google's public statements typically emphasize a lofty focus on others' interests, such as giving users the most relevant results and paying publishers as much as possible. Yet internal Google documents reveal managers who are primarily focused on advancing the company's own interests, including through concealed tactics that contradict the company's public commitments.

About the Document

In a 169-page memorandum dated August 8, 2012, the FTC's Bureau of Competition staff examined Google's conduct in search and search advertising. Through a Freedom of Information Act (FOIA) request, the WSJ sought copies of FTC records pertaining to Google. It seems this memorandum was intended to be withheld from FTC's FOIA request, as it probably could have been pursuant to FOIA exception 5 (deliberative process privilege). Nonetheless, the FTC inadvertently produced the memorandum – or, more precisely, approximately half the pages of the memorandum. In particular, the FTC produced the pages with even numbers.

To ease readers' analysis of the memorandum, I have improved the PDF file posted by the WSJ. Key enhancements: I used optical character recognition to index the file's text (facilitating users' full-text search within the file and allowing search engines to index its contents). I deskewed the file (straightening crooked scans), corrected PDF page numbering (to match the document's original numbering), created hyperlinks to access footnotes, and added a PDF navigation panel with the document's table of contents. The resulting document: FTC Bureau of Competition Memorandum about Google – August 8, 2012.

AdWords API restrictions impeding competition

In my June 2008 PPC Platform Competition and Google's "May Not Copy" Restriction and July 2008 congressional testimony about competition in online search, it seems I was the first to alert policy-makers to brazen restrictions in Google's AdWords API Terms and Conditions. The AdWords API provided full-featured access to advertisers' AdWords campaigns. With both read and write capabilities, the AdWords API provided a straightforward facility for toolmakers to copy advertisers' campaigns from AdWords to competing services, optimize campaigns across multiple services, and consolidate reporting across services. Instead, Google inserted contractual restrictions banning all of these functions. (Among other restrictions: …)

The FTC staff report reveals that, even within Google, the AdWords API restrictions were controversial. Holden ultimately sought "to eliminate this requirement" (key AdWords API restrictions) because the removal would be "better for customers and the industry as a whole" since it would eliminate "the additional overhead needed to manage these other networks [in light of] the small amount of additional traffic" (staff memo at p.48, citing GOOGWOJC-000044501-05). Holden indicated that removing AdWords API restrictions would pave the way to more advertisers using more ad platforms, which he called a "significant boost to … competitors" (id.). He further confirmed that the change would bring cost savings to advertisers, noting that Microsoft and Yahoo "have lower average CPAs" (cost per acquisition, a key measure of price) (id.), meaning that advertisers would be receptive to using those platforms if they could easily do so. Indeed, Google had known these effects all along. In a 2006 document not attributed to a specific author, the FTC quotes Google planning to "fight commoditization of search networks by enforcing AdWords API T&Cs" (footnote 546, citing GOOGKAMA-0000015528), indicating that AdWords API restrictions allowed Google to avoid competing on the merits.

Specialized search and favoring Google's own services: benefiting users or Google?

For nearly a decade, competitors and others have questioned Google's practice of featuring its own services in its search results. The core concern is that Google grants its own services favored and certain placement, preferred format, and other benefits unavailable to competitors – giving Google a significant advantage as it enters new sectors. Indeed, anticipating Google's entry and advantages, prospective competitors might reasonably seek other opportunities. As a result, users end up with fewer choices of service providers, and advertisers with less ability to find alternatives if Google's offerings are too costly or otherwise undesirable.

Against this backdrop, Google historically claimed its new search results were “quicker and less hassle” than alternatives, and that the old “ten blue links” format was outdated. “ [W]e built Google for users,” the company claimed, arguing that the design changes benefit users. In a widely-read 2008 post, Google Fellow Amit Singhal explained Google's emphasis on “the most relevant results” and the methods used to assure result relevance. Google's “ Ten things we know to be true” principles begin with “focus on the user,” claiming that Google's services “will ultimately serve you [users], rather than our own internal goal or bottom line.”

With access to internal Google discussions, FTC staff paint quite a different picture of Google's motivations. Far from assessing what would most benefit users, Google staff examine the “threat” (footnote 102, citing GOOG-ITA-04-0004120-46) and “challenge” of “aggregators” which would cause “loss of query volumes” to competing sites and which also offer a “better advertiser proposition” through “cheaper, lower-risk” pricing (FTC staff report p.20 and footnote 102, citing GOOG-Texas-1486928-29). The documents continue at length: “the power of these brands [competing services] and risk to our monetizable traffic” (footnote 102, citing GOOG-ITA-05-0012603-16), with “merchants increasing % of spend on” competing services (footnote 102, citing GOOG-ITA-04-0004120-46). Bill Brougher, a Google product manager assessed the risks:

    [W]hat is the real threat if we don't execute on verticals? (a) loss of traffic from because folks search elsewhere for some queries; (b) related revenue loss for high spend verticals like travel; (c) missing opty if someone else creates the platform to build verticals; (d) if one of our big competitors builds a constellation of high quality verticals, we are hurt badly

(footnote 102, citing GOOG-ITA-06-0021809-13) Notice Brougher's sole focus on Google's business interests, with not a word spent on what is best for users.

Moreover, the staff report documents Google's willingness to worsen search results in order to advance the company's strategic interests. Google's John Hanke (then Vice President of Product Management for Geo) explained that “we want to win [in local] and we are willing to take some hits [i.e. trigger incorrectly sometimes]” (footnote 121, citing GOOG-Texas-0909676-77, emphasis added). Google also proved willing to sacrifice user experience in its efforts to demote competing services, particularly in the competitive sector of comparison shopping services. Google used human “raters” to compare product listings, but in 2006 experiments the raters repeatedly criticized Google's proposed changes because they favored competing comparison shopping services: “We had moderate losses [in raters' assessments of quality when Google made proposed changes] because the raters thought this was worse than a bizrate or nextag page” (footnote 154, citing GOOGSING-000014116-17). Rather than accept raters' assessment that competitors had high-quality offerings that should remain in search results, Google changed raters' criteria twice, finally imposing a set of criteria in which competitors' services were no longer ranked favorably (footnote 154, citing GOOGEC-0168014-27, GOOGEC-0148152-56, GOOGC-0014649).

Specialized search and favoring Google's own services: targeting bad sites or solid competitors?

In public statements, Google often claimed that sites were rightly deprioritized in search results, indicating that demotions targeted “low quality,” “shallow” sites with “duplicate, overlapping, or redundant” content that is “mass-produced by or outsourced to a large number of creators … so that individual pages or sites don't get as much attention or care.” Google Senior Vice President Jonathan Rosenberg chose the colorful phrase “faceless scribes of drivel” to describe sites Google would demote “to the back of the arena.”

But when it came to the competing shopping services Google staff sought to relegate, Google's internal assessments were quite different. “The bizrate/nextag/epinions pages are decently good results. They are usually well-format[t]ed, rarely broken, load quickly and usually on-topic. Raters tend to like them. …. [R]aters like the variety of choices the meta-shopping site(s) seem… to give” (footnote 154, citing GOOGSING-000014375).

Here too, Google's senior leaders approved the decision to favor Google's services. Google co-founder Larry Page personally reviewed the prominence of Google's services and, indeed, sought to make Google services more prominent. For example: “Larry thought product [Google's shopping service] should get more exposure” (footnote 120, citing GOOG-Texas-1004148). Product managers agreed, calling it “strategic” to “dial up” Google Shopping (footnote 120, citing GOOG-Texas-0197424). Others noted the competitive importance: Preferred placement of Google's specialized search services was deemed important to avoid “ced[ing] recent share gains to competitors” (footnote 121, citing GOOG-Texas-0191859) or indeed essential: “most of us on geo [Google Local] think we won't win unless we can inject a lot more of local directly into google results” (footnote 121, citing GOOGEC-0069974). Assessing “Google's key strengths” in launching product search, one manager flagged Google's control over “ real estate for the ~70MM of product queries/day in US/UK/De alone” (footnote 121, citing GOOG-Texas-0199909), a unique advantage that competing services could not match.

Specialized search and favoring Google's own services: algorithms versus human decisions

A separate divergence from Google's public statements comes in the use of staff decisions versus algorithms to select results. Amit Singhal's 2008 post presented the company's (supposed) insistence on “no manual intervention”:

    In our view, the web is built by people. You are the ones creating pages and linking to pages. We are using all this human contribution through our algorithms. The final ordering of the results is decided by our algorithms using the contributions of the greater Internet community, not manually by us. We believe that the subjective judgment of any individual is, well ... subjective, and information distilled by our algorithms from the vast amount of human knowledge encoded in the web pages and their links is better than individual subjectivity.

2011 testimony from Google Chairman Eric Schmidt (written responses to the Senate Committee on the Judiciary Subcommittee on Antitrust, Competition Policy, and Consumer Rights) made similar claims: “The decision whether to display a onebox is determined based on Google's assessment of user intent” (p.2). Schmidt further claimed that Google displayed its own services because they “are responsive to what users are looking for,” in order to “enhance[e] user satisfaction" (p.2).

The FTC's memorandum quotes ample internal discussions to the contrary. For one, Google repeatedly changed the instructions for raters until raters assessed Google's services favorably (the practice discussed above, citing and quoting from footnote 154). Similarly, Page called for “more exposure” for Google services and staff wanted “a lot more of local directly into search results” (cited above). In each instance, Google managers and staff substituted their judgment for algorithms and user preferences as embodied in click-through rate. Furthermore, Google modified search algorithms to show Google's services whenever a “blessed site” (key competitor) appeared. Google staff explained the process: “Product universal top promotion based on shopping comparison [site] presence” (footnote 136 citing GOOGLR-00161978) and “add[ing] a 'concurring sites' signal to bias ourselves toward triggering [display of a Google local service] when a local-oriented aggregator site (i.e. Citysearch) shows up in the web results” (footnote 136 citing GOOGLR-00297666). Whether implemented by hand or through human-directed changes to algorithms, Google sought to put its own services first, contrary to prior commitments to evenhandedness.

At the same time, Google systematically applied lesser standards to its own services. Examining Google's launch report for a 2008 algorithm change, FTC staff said that Google elected to show its product search OneBox “regardless of the quality” of that result (footnote 119, citing GOOGLR-00330279-80) and despite “pretty terribly embarrassing failures” in returning low-quality results (footnote 170, citing GOOGWRIG-000041022). Indeed, Google's product search service apparently failed Google's standard criteria for being indexed by Google search (p.80 and footnote 461), yet Google nonetheless put the service in top positions (p.30 and footnote 170, citing GOOG-Texas-0199877-906).

The FTC's documents also call into question Eric Schmidt's 2011 claim (in written responses to a Senate committee) that “universal search results are our search service -- they are not some separate 'Google product or service' that can be 'favored.'” The quotes in the preceding paragraph indicate that Google staff knew they could give Google's own services “more exposure” by “inject[ing] a lot more of [the services] into google results.” Whether or not these are “separate” services, they certainly can be made more or less prominent--as Google's Page and staff recognized, but as Schmidt's testimony denies. Meanwhile, in oral testimony, Schmidt said “I'm not aware of any unnecessary or strange boosts or biases.” But consider Google's “concurring sites” feature, which caused Google services to appear whenever key competitors' services were shown (footnote 136 citing GOOGLR-00297666). This was surely not genuinely “necessary” in the sense that search could not function without it, and indeed Google's own raters seemed to think search would be better without it. And these insertions were surely “strange” in the sense that they were unknown outside Google until the FTC memorandum became available last week. In response to a question from Senator Lee, asking whether Google “cooked it” to make its results always appear in a particular position, Schmidt responded “I can assure you, we've not cooked anything”--but in fact the “concurring sites” feature exactly guaranteed that Google's service would appear, and Google staff deliberated at length over the position in which Google services would appear (footnote 138).

All in all, Google's internal discussions show a company acutely aware of its special advantage: Google could increase the chance of its new services succeeding by making them prominent. Users might dislike the changes, but Google managers were plainly willing to take actions their own raters considered undesirable in order to increase the uptake of the company's new services. Schmidt denied that such tampering was possible or even logically coherent, but in fact it was widespread.

Payments to publishers: as much as possible, or just enough to meet waning competition?

In public statements, Google touts its efforts to “ help… online publishers … earn the most advertising revenue possible.” I've always found this a strange claim: Google could easily cut its fees so that publishers retain more of advertisers' payments. Instead, publishers have long reported – and the FTC's document now explicitly confirms – that Google has raised its fees and thus cut payments to publishers. The FTC memorandum quotes Google co-founder Sergey Brin: “Our general philosophy with renewals has been to reduce TAC across the board” (footnote 517, citing GOOGBRIN-000025680). Google staff confirm an “overall goal [of] better AFS economics” through “stricter AFS Direct revenue-share tiering guidelines” (footnote 517, citing GOOGBRAD-000012890) – that is, lower payments to publishers. The FTC even released revenue share tiers for a representative publisher, reporting a drop from 80%, 85%, and 87.5% to 73%, 75%, and 77% (footnote 320, citing GOOG-AFS-000000327), increasing Google's fees to the publisher by as much as 84%. (Methodology: divide Google's new fee by its old fee, e.g. (1-0.875)/(1-0.77)=1.84.)
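The methodology quoted above can be checked directly: Google's fee is (1 - publisher revenue share), and the increase is the new fee divided by the old fee.

```python
# Revenue-share tiers from the FTC memorandum (footnote 320).
old_shares = [0.80, 0.85, 0.875]   # publisher's share before the change
new_shares = [0.73, 0.75, 0.77]    # publisher's share after the change

# Google's fee is (1 - share); the increase is new fee / old fee.
increases = [(1 - new) / (1 - old) for old, new in zip(old_shares, new_shares)]
# Largest tier: (1 - 0.77) / (1 - 0.875) = 0.23 / 0.125 = 1.84,
# i.e. an 84% increase in Google's fee, as the article states.
```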

The FTC's investigation revealed the reason why Google was able to impose these payment reductions and fee increases: Google does not face effective competition for small to midsized publishers. The FTC memorandum quotes no documents in which Google managers worry about Microsoft (or others) aggressively recruiting Google's small to midsized publishers. Indeed, FTC staff report that Microsoft largely ceased attempts in this vein. (Assessing Microsoft's withdrawal, the FTC staff note Google contract provisions preventing a competing advertising service from bidding only on those searches and pages where it has superior ads. Thus, Microsoft had little ability to bid on certain terms but not others. See memorandum p.106.)

The FTC notes Microsoft continuing to pursue some large Google publishers, but with limited success. A notable example is AOL, which Google staff knew Microsoft "aggressively woo[ed] … with large guarantees" (p.108). An internal Google analysis showed little concern about losing AOL but significant concern about Microsoft growing: "AOL holds marginal search share but represents scale gains for a Microsoft + Yahoo! Partnership… AOL/Microsoft combination has modest impact on market dynamics, but material increase in scale of Microsoft's search & ads platform" (p.108). Google had historically withheld many features from AOL, whereas AOL CEO Tim Armstrong sought more. (WSJ reported: "Armstrong want[ed] AOL to get access to the search innovation pipeline at Google, rather than just receive a more basic product.") By all indications Google accepted AOL's request only due to pressure from Microsoft.

A Critical Perspective

The WSJ also recently flagged Google's “close ties to White House,” noting large campaign contributions, more than 230 meetings at the White House, high lobbying expenditures, and ex-Google staff serving in senior staff positions. In an unusual press release, the FTC denied that improper factors affected the Commission's decision. Google's Rachel Whetstone, SVP Communications and Policy, responded by shifting focus to WSJ owner Rupert Murdoch personally, then explaining that some of the meetings were industry associations and other matters unrelated to Google's competition practices.

Without records confirming discussion topics or how decisions were made, it is difficult to reach firm conclusions about the process that led the FTC not to pursue claims against Google. It is also difficult to rule out the WSJ's conclusion of political influence. Indeed, Google used exactly this reasoning in critiquing the WSJ's analysis: “We understand that what was sent to the Wall Street Journal represents 50% of one document written by 50% of the FTC case teams.” Senator Mike Lee this week confirmed that the Senate Committee on the Judiciary will investigate the possibility of improper influence, and perhaps that investigation will yield further insight. But even the incomplete FTC memorandum reproduces scores of quotes from Google documents, and these quotes offer an unusual opportunity to compare Google's internal statements with its public claims. Google's broadest claims of lofty motivations and Internet-wide benefits were always suspect, and Google's public statements fall further into question when compared with frank internal discussions.

There's plenty more to explore in the FTC's report. I will post the rest of the document if a further FOIA request or other development makes more of it available.
132  Main Area and Open Discussion / General Software Discussion / Re: Microsoft OneNote - Office Lens for iPhone and Android phones on: April 07, 2015, 04:49:27 AM
I had pointed out elsewhere that MS Office Lens on the Windows phone looked potentially useful:
3. Office Lens for capturing documents and whiteboards with your Windows Phone. Amazing. Potentially very useful. Just what I was needing/wanting. Now all I need is a Windows phone, and this is a good reason for getting one (so I can test Office Lens). ...

I still have not justified getting a Windows Phone, so I was pleased to read today that:
Office Lens comes to iPhone and Android - Office Blogs
by OneNote Team, on April 2, 2015

Just over a year ago, we introduced Office Lens for Windows Phone—and over that time the app has become one of the most popular free apps on Windows Phone, with an average rating of 4.6 stars (out of 5) from more than 18,500 reviews.

Today, we’re releasing Office Lens for iPhone and Android phones.

Office Lens is a handy capture app that turns your smartphone into a pocket scanner and it works with OneNote so you’ll never lose a thing. Use it to take pictures of receipts, business cards, menus, whiteboards or sticky notes—then let Office Lens crop, enhance and save to OneNote. Just like that—all the scanned images you capture from Office Lens are accessible on all your devices. ...(read more at the link).
133  Main Area and Open Discussion / General Software Discussion / Re: Microsoft OneNote - some experiential Tips & Tricks on: April 07, 2015, 04:20:00 AM
It's nice to see that OneNote is now a free program.
I do have one hiccup with this software, though.
The other day, I tried to import a .pdf file, but instead of one page (as hoped for) it created a separate "page" for each page - I ended up with over 400!
I then went on to delete the OneNote pages, but it seems that we can only delete one page at a time.
Fortunately, I was able to transfer the other pages to another "tab" and then simply delete the one with all the extra pages.
I wish there was a way to batch-delete some pages instead of one at a time.

As well as my comment above, I thought I might respond to this more specifically (and apologies for any duplication).
  • Definition of "Free": I personally wouldn't describe what Microsoft offers as a "free" OneNote program as being truly free. There are too many hooks/constraints associated with it, and it's not really all that usable compared to the full client-based software (which comes with MS Office). For that reason, I would not recommend it, except possibly as a taste of some of what OneNote can do.
    However, if you wanted to trial OneNote, then you could download a free trial of the full MS Office 2013, which would be good for a 60-day trial:
    60-day evaluation copy of Office Professional Plus 2013
    EDIT 2015-04-09: fixed bad link here.
    The download is:
    • Office Professional Plus 2013 32-bit IMG - this is the one you will most likely need; a 666MB file.
    • Office Professional Plus 2013 64-bit IMG

    Be warned, though, that the experience of many users - myself included - who have downloaded this seems to be that it will not always install, for various obscure reasons, and there seems to be little or no support offered by Microsoft to get the thing working. Amazing.

  • Inserting PDF files: The issue with inserting PDF files is that you can insert them into a OneNote page either as an actual file or as an image printout of the file contents. In the latter case, OneNote is effectively set up as an output device (a printer) on your system - named "Send To OneNote 2013", or similar. This is what you seem to have done. I can't really see that it is all that useful - i.e., why would a user want to do this? When you get a separate OneNote page for each page of the PDF file, it is a PITA.
    However, once you discover that you have all these unwanted pages, the correct/quickest way to delete them is as a "batch". Go to the first page and select its tab (usually 2 clicks, or until it goes grey), then scroll down the page tabs to the last page in the series that you want deleted and select it whilst holding down the Shift key - that selects all the pages in that range, from first to last. Then press the Delete key, which moves the pages to Trash, from where they will be deleted permanently after 60 days or so (the default) or whatever the user setting is, if different. The delete is thus reversible if you change your mind about it.

    • 1. OneNote and OCR: The printed images of those PDF file pages are in a "background" image, or something, and even if you have set OneNote to auto-OCR text in images, it will not OCR "background" images. I seem to recall that, if you want the text of those images to be OCRed and indexed for subsequent search, you have to select every image and bring it to foreground, and then check that those will be or are being auto-OCRed by default. As I said above, a PITA.
      Thus, if you want the text in the images in a PDF file to be indexed for search, then OCR the images in the PDF file, and rely on your Windows Desktop Search to do the indexing/searching (as it can do with text in .TIF image files).

    • 2. OneNote OCR threshold text: I have noticed that OneNote will not OCR scan and index text in any image on a page where the amount of text is below some undefined threshold.
      For example, yesterday, I clipped the image of a subtitle on a video where the text in the image was:
      "ice age is creeping over the northern hemisphere even then it won't be as bad" (i.e., 15 words).
      I then got OneNote to select the text from the image, and OneNote reported a longish error message that began:
      This image does not contain any recognised text. ...

      So I clumped that subtitle clip with several others from the video and then consolidated them as a single, larger image:


       - and then got OneNote to select the text from the larger image and got (with errors, and with lines inserted to distinguish each imaged group of text):
      British professor Hubert lamb says that
      a new
      ice age is creeping cwer the northern
      hem isphere even then it 'MM1t be as bad
      as the last ice age sixty thousand years
      then NewYork Cincinnati Saint lost
      runner 5,000 be unified
      presumably no tramc movement school
      was let out for the day
      and thats the way it is Monday Sept
      11th 1972
134  Other Software / Developer's Corner / Re: Best programming language to pick up for applications? on: April 01, 2015, 05:54:39 PM
Maybe the Q here is whether you need to bother with a programming language at all - e.g., if (say) you could do most of it with an Access database and/or with Excel spreadsheets?   ohmy
I've seen that happen before...
So have I - hence why I've got nervous twitches right now smiley

Yes, "nervous twitches" would be about right. Well, I've come across all sorts of daft opinions - some of them mine, unfortunately - on this sort of issue, and I've sometimes had the opportunity to see how those that got a chance to go ahead (none of mine, fortunately) worked out in practice/implementation. A couple of times I've seen these result in screw-ups of such a high order of magnitude that they literally broke the companies involved and caused them to fail financially. In each of these cases I could literally see it coming and watched in awe as events unfolded to their predictable collapse. In two cases I stuck with the failing companies (as an employee) as I could see the proverbial pot of gold at the end of the rainbow, and was quite happy with the outcome (financial reward), though the wait could be quite stressful at times.
So, to avoid getting involved in a repeat of those past hard lessons of history, I usually apply a straightforward risk management approach that involves figuring out what has tended to work best in the past from a combination of business needs coupled together with my growing and vicarious experience of dealing with similar business and technical issues.

That approach can potentially save a lot of time, cost, anguish and jobs, and ultimately keep the customers happy with a businesslike and cost-effective outcome.
I am still learning, but I detect an awful sameness or deja vu in some of the daft opinions that I come across. Stupidity, in the form of wilful blindness to the potential and often predictable consequential risks arising from our (often egocentric) actions, seems to be the sole prerogative of us humans. It also seems to be dreadfully prevalent or over-represented in ICT-related fields, or where ICT is more heavily involved - precisely where you might reasonably expect the average IQ of those involved to be relatively high.
As a classic example, I include in my vicarious experience:
Methodology wars: The conflicted debate between people of two firmly opposed schools of intellectual thought regarding the best/purest theoretical methodology to be adopted for information engineering, amongst designers in a very large and mission-critical banking systems redevelopment project that was 6 months underway. The debate seemed devoid of pragmatism. It became a form of intellectual masturbation and an excuse for people to avoid engaging in any actual productive work, instead holding lots of unproductive meetings to variously "discuss the methodology issues", "review/revise requirements" and "re-plan the project" in the light of the proposed changes in methodology direction. They also made persuasive presentations of their sometimes off-the-planet ideas to management in the client accounts and the "home" organisation.
This merry-go-round of meetings and presentations tended to mask the reality that the designers and project management were incompetent. They were listened to because, as a group, they were well-regarded (several held PhDs) and had been chosen for the job, and it was thought they knew what they were on about.
After having created unscheduled project delays amounting to about 18 months, this nonsense eventually and predictably precipitated termination. The Board forcefully retired the CEO and replaced him with a hardened businessman who was directed to dismantle and sell off the company. He then closed down the project with prejudice - including firings at senior (VP) corporate executive level for those held responsible or who tried to get in his way, and mass layoffs of most/all project personnel. The residual core business, consisting of the profitable and operational components, was sold off as a single block in a bidding process to the successful bidder - one of 3 large ICT services corporations that bid.
/Rant Off.
135  Other Software / Developer's Corner / Re: Best programming language to pick up for applications? on: April 01, 2015, 11:16:26 AM
Maybe the Q here is whether you need to bother with a programming language at all - e.g., if (say) you could do most of it with an Access database and/or with Excel spreadsheets?   ohmy
I've seen that happen before...
136  Main Area and Open Discussion / General Software Discussion / Re: [IDEA] text recognition on: April 01, 2015, 06:17:07 AM
Heh Kalos,
You're drifting into an old idea I had a few years ago. Depending on whether your company is willing to spend a little on some of the stuff you are looking for - by now you have enough of these "medium-low-level" requests that it could be interesting if they could all be batched into a couple of apps.

Until I ran out of money, I had the idea I would commission external programmers to start with a basic open source "shell" program. Mine happened to start with a word processor. Then the programmers began adding all kinds of custom power tools to it, to process text and do stuff. So then unlike Microsoft Word that just comes off a shelf and stays there, you get a legit core program (like a word processor) that then has things like "tools" menus that build in all the neat tricks you want.

Reminds me of Pathagoras - a really brilliantly useful documentation tool bolted-on to MS Word.
137  Main Area and Open Discussion / General Software Discussion / Re: [IDEA] text recognition on: April 01, 2015, 06:13:05 AM
...that's why, it would be nice to have a program that we will select the part of the screen that displays the term we want to look up, and the program will OCR it and then match it in the database and display the associated text in a popup!

If that is your requirement, then you can effectively do that using MS OneNote, thus:
  • Step 1: Press hotkey Win+Shift+S, which pops up the cross-hairs for image capture.
  • Step 2: Drag the crosshairs to enclose the desired image area (including any text you want to capture), and then right-click the mouse. This captures the area into OneNote.
  • Step 3: Alt-Tab to OneNote which automatically displays the page with the just-captured image. If you have set OneNote by default to automatically OCR any text in any image, then you can right-click the image and select the now OCRed text you are interested in. All the text in the image will be automatically indexed by OneNote and thus becomes searchable.

I made a post in the Clipboard Help+Spell discussion area about something similar to this: Feature request: automatic OCR of captured images.

If the reference database you want to query is in OneNote, then it would be very simple to search for any occurrence of the scanned (OCRed) string of characters in OneNote.
Otherwise, using (say) AHK, you could select the string and initiate it as a search in some other database, or on the Internet, or in the DC Forum.
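For that last step - handing the selected (OCRed) string off to a search - the post suggests AHK, but here is a minimal Python sketch of the same idea, assuming the OCRed text has already been copied out of OneNote; the search-engine base URL is just an illustrative choice, not a fixed part of the workflow:

```python
import webbrowser
from urllib.parse import quote_plus

def build_search_url(query: str, base: str = "https://duckduckgo.com/?q=") -> str:
    """Turn an OCRed/selected string into a URL-encoded web-search URL."""
    return base + quote_plus(query.strip())

url = build_search_url("  raw read error rate  ")
print(url)  # https://duckduckgo.com/?q=raw+read+error+rate

# Hand the query off to the default browser (commented out so the
# sketch can run headlessly):
# webbrowser.open(url)
```

The same `build_search_url` helper could point `base` at any other query-string-driven target - an internal database front-end, or a forum search - which is all the "initiate it as a search in some other database" step really amounts to.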
138 Software / DC Member Programs and Projects / Re: Sumatra Highlight Helper on: April 01, 2015, 05:34:30 AM
@Nod5: Your comments in response to (and including) @superboyac's bring some of the MS OneNote Notebook ergonomics and techniques to mind.
For example:
  • earmarking: though "earmarking" is probably not the correct English term for what is being suggested (I think making a dog-ear as a bookmark might be what was intended), you can meet the description given in the above comments by using "Tags" in OneNote, and also with typed or handwritten notes added anywhere on the page being viewed.

  • finding stuff: you can automatically index/search and/or make a hyperlink for any word, phrase or image in the reading matter or comments (hyperlinking being similar to a Wiki) and index/search OCR'd text in images. The reader can also easily add copious notes at the hyperlink page - which typically could be (say) in another Notebook, away from the actual hyperlinked text, if required.

  • display: the Notebooks can be viewed in a 2- or 3-pane display (Notebook/Section, Page/Subpages, Page Content (showing any notes or images added by any user/reader)).

On the subject of a need for a book-like appearance: In terms of priorities of requirements (e.g., A=Mandatory, B=Highly Desirable, C=Nice-to-Have) I would suggest that this is a C - i.e., more of a touchy-feely thing to help people get over their natural resistance to the transition from an analogue to a digital medium. So it could be a distraction from meeting fundamental requirements of the A + B categories. Trying to meet a C requirement could turn out to be a potential bottomless pit for development costs and with ultimately very little real added value.

If you are focussed on PDF files, then you have a relatively narrow focus on one of several restrictive generic or third-party technologies, over which you have no real control.
However, if you are seeking to improve that technology so that reading and making notes on the PDF digital medium becomes at least as easy as and as ergonomically efficient as an analogue medium - preferably better in both cases - then you are on a relatively well-trodden path that was arguably kicked into the Big Time by the advent of things like the Nook or the Kindle - though both are seemingly designed as proprietary market entrapment devices.
However, you are arguably amongst friends/helpers who may have gone before you, in the form of the developers responsible for the development of reference/reading management systems - refer for example:

If you have not already done so, then I would suggest that examining/trialling those free systems, and some collaborative dialogue with their developers about their forward plans, might be very useful - even synergistic in effect - and might help you avoid re-inventing the wheel in some areas, to some extent.
139  Other Software / Developer's Corner / Re: Best programming language to pick up for applications? on: April 01, 2015, 03:47:15 AM
To as great an extent as possible, the business requirements for the application should drive the approach to the selection of programming tools/languages. It should not be a "technical" IT decision per se, nor driven by vendors peddling a favoured or proprietary approach.

Programmers generally make mistakes, and there is generally no real "best" programming language. So one should seek to avoid coding/logic errors and endeavour to use suitable programming languages for any given case(s) - which was why I pointed to this as a potentially useful tool for automating code generation:
Experts and beginners alike might be interested in taking a look at this: PWCT- Programming Without Coding Technology: Free Science & Engineering software

Generally speaking, for a new application development, I would recommend considering an assembler-level language for those components where maximum speed is a priority - e.g., (say) for frequently-used callable subroutines where speed and small size (overhead) could be important.

Otherwise, for the core application, a rule-of-thumb would be to use any decent language which is generally well-supported and more widely used - avoid the more obscure or less-used languages and scripting tools.
It's probably also worth at least trying to future-proof your application, so that it is as compatible as possible with the more ubiquitous and/or likely future technologies. If you've ever had to support a core application with an embedded legacy technology component, you will appreciate this point.

Mind you, it probably can only be a good thing if you (say) happen to be the only COBOL programmer available who can support the legacy COBOL code embedded in a core part of a strategically important financial system which has to undergo mandatory development/change to meet some new statutory change or other.   ohmy
(Some people might say that this is the sort of thing that occurred recently in a key NZ government software development project, but I couldn't possibly comment.)
140  Main Area and Open Discussion / Living Room / Re: silly humor - post 'em here! [warning some NSFW and adult content] on: April 01, 2015, 02:46:25 AM
Funny joke. But how come the Irish guy has a name but the Polish guy doesn't?
Yeah, I noticed that. It's the sort of thing that has been puzzling philosophers for centuries.

141  Main Area and Open Discussion / Living Room / Re: silly humor - post 'em here! [warning some NSFW and adult content] on: April 01, 2015, 12:09:24 AM
Irish men are smarter than Polish men.

A Polish guy and Murphy go into a pastry shop.

The Polish guy immediately whisks three cookies into his pocket with lightning speed. The baker doesn't even notice

The Polish guy then says to Murphy, "You see how clever we are? You Paddies can never beat that!"

Murphy says to the Polish guy, "Watch dis, any Paddy is smarter din you, and I'll prove it to ya."

Murphy says to the baker, "Gimme a cookie, I'll show ya a magic trick!"

The baker is interested and gives him the cookie, which Murphy promptly eats.

Then Murphy says to the baker, "Gimme anudder cookie for me magic trick."

The baker is getting suspicious, but he gives it to him. Murphy eats this one too.

Then Murphy says again, "Gimme one more cookie..."

By now, the baker is becoming annoyed, but gives him one anyway as he wants to see the magic trick.
Murphy eats this one too.

Now the baker is really mad, and he yells, "OK ... so where is your famous magic trick?"

Murphy says ...." Now look in the Polish guy's pocket!"
142  Main Area and Open Discussion / Living Room / Re: Reader's Corner - The Library of Utopia + resource links on: March 29, 2015, 11:11:00 AM
2015-03-30 0509hrs: Added:
- to the index table in the opening post.
143  Other Software / Developer's Corner / Re: Best programming language to pick up for applications? on: March 28, 2015, 10:35:21 AM
Experts and beginners alike might be interested in taking a look at this: PWCT- Programming Without Coding Technology: Free Science & Engineering software
144  Other Software / Developer's Corner / Re: PWCT- Programming Without Coding Technology: Free Science & Engineering software on: March 28, 2015, 10:33:05 AM
This is now up to PWCT 1.9 (Art) Rev. 2015.03.26.

There is an Introduction at
If you want to learn programming, create applications/systems, or get some new ideas about visual programming in practice, then you are in the right place. The goal of this project is to present programming to every computer user, whether they are beginners or professionals. For beginners, that means the tools of programming must be accessible - must be easy. So I decided to take coding out of programming. And presenting programming to professional developers requires a tool that is productive, unlimited and extensible.

PWCT is a free, open-source project, and the documentation and support are also free. Installing PWCT on MS-Windows is easy through a simple installation program; after downloading the software you can download many samples, tutorials and movies. Some PWCT users use the software to create presentations and educational software. Many users use it for business applications. For my part, I have used the software to create a new programming language as proof that the technology is productive, powerful and unlimited. This language is called the Supernova programming language, and it is a free, open-source project hosted on SourceForge. So the software can be used in many different applications.

The domain of the problem is called “Visual Programming Languages.” There are many projects in this domain, but most of these languages are domain-specific languages used in education; with respect to general-purpose visual programming languages, there are few. PWCT doesn't use the drag-and-drop method. PWCT provides a new method based on automatic steps-tree generation and update, in response to interaction with components that present simple data-entry forms to the user. The idea behind this new method is to mix programming using a diagrammatic approach with programming using a form-based approach, where the integration between the two approaches is done seamlessly through an automatic visual-representation generation process. This is just the basic idea, and many other ideas have been developed around this concept to get a practical general-purpose visual programming language for real-world tasks.

This is followed by an impressive list of features (see link for more).
145  News and Reviews / Mini-Reviews by Members / Re: Malwarebytes FREE and PRO/Premium - Mini-Review. on: March 27, 2015, 03:03:31 AM
As posted into the opening post:
UPDATE 2015-03-27: Release of version 2.1.4
146  News and Reviews / Mini-Reviews by Members / Re: Hard Disk Sentinel PRO - Mini-Review on: March 27, 2015, 12:14:48 AM
@4wd: Yes, I had come to the same conclusions as you seem to have done. There are some notes on the HDS site that indicate (in rather tortured English) more or less what the BackBlaze comment says (which I had not seen, so thanks for the link).

I think I may have deduced what those two SMART charts mean:
  • The SER (Seek Error Rate): the chart seems to be a graph over time showing when a Seek Error occurred and what the accumulated seek counter stood at, at that point.
  • The RRER (Raw Read Error Rate): the chart seems to be a graph over time showing when a Raw Read Error occurred and what the incremental read counter (reads since last error) stood at, at that point.

Explanation: Thus, we have, after an extended period of apparently improving stability/reliability (reducing frequency of errors), a second Seek/Raw Read error occurring on 2015-03-20 relatively soon after the last/preceding error, and then a third occurring relatively soon after the second.
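That "reducing frequency of errors" reading amounts to looking at the gaps between successive error events. A toy Python sketch, using made-up dates (not the actual chart data), shows the calculation:

```python
from datetime import date

# Hypothetical SMART error event dates (illustrative only, not read
# off the actual HDS charts).
events = [date(2014, 6, 1), date(2014, 11, 20), date(2015, 3, 14),
          date(2015, 3, 20), date(2015, 3, 24)]

# Days between consecutive errors: lengthening intervals = improving
# stability, shortening intervals = worsening.
intervals = [(b - a).days for a, b in zip(events, events[1:])]
print(intervals)  # [172, 114, 6, 4]

# A final interval far below the long-run average suggests the error
# frequency is increasing again - the "wait and see" case above.
worsening = intervals[-1] < sum(intervals) / len(intervals)
print(worsening)  # True
```

With real data you would feed in the charted error dates; the point is only that "relatively soon after the last/preceding error" is a statement about shrinking inter-error intervals.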

We probably won't be able to establish what caused the errors, but I shall examine the Windows Events logs to see if anything shows there. However, we can see from the CHKDSK output (run after the SMART S/RR errors were charted) that CHKDSK:
  • corrected orphaned file errors, and found some unindexed files, in Stage 2.
  • corrected free space marked as allocated in the MFT, in Stage 3.
  • corrected free space marked as allocated in the volume bitmap, in Stage 3.

I don't know much about these things, but I would suppose that, if further S/RR errors occur within a short period, then there may be a problem causing reducing stability/reliability (i.e., increasing frequency of errors). Otherwise, the errors may be an improbable statistical coincidence, or the CHKDSK operation may have fixed something that could have been a causal problem of the errors, in (say) the file structure.
So, it's probably a case of "wait and see".

Yesterday I downloaded, installed and ran Seagate's proprietary SeaTools software to check this (Seagate) disk, and it checked out with no problems on an "extended test" run - and Seagate's own instructions are that if it passes that test then there is unlikely to be anything wrong with the disk.
I wouldn't have known about any of this at all if I had not had the HDS information charts showing the disk health status and the SE/RRE counters' data from that particular disk.
147  News and Reviews / Mini-Reviews by Members / Re: Hard Disk Sentinel PRO - Mini-Review on: March 25, 2015, 08:39:55 PM
I'm puzzled by this: this is for a Seagate USB 2.0 external hard drive (1TB).
HDS says the disk is in perfect condition, but these SMART Raw Read and Seek Error Rates from HDS look confusing to me.
What is going on with this drive?


I ran CHKDSK on it, and no major probs.:

148  Main Area and Open Discussion / General Software Discussion / Re: In search of an alternative to InfoSelect ... on: March 24, 2015, 07:39:34 AM
Thanks for your comment - I'm always interested in this thread.
Scrivener is pretty impressive, and I have trialled it and thought of using it, but the reason I am using OneNote is that I have discovered new requirements by using it, and they can't be met by other software that I have trialled, so far.
I have trialled IS9 but still haven't migrated away from IS8 though. It's hard to beat for my requirements.
I find your "multiple installations" (that's databases, I guess) for IS9 (IS 2007) a novel idea, but I would suggest that it may be unnecessary since one installation of IS can open several separate databases simultaneously or sequentially, as required. You don't need them all open all of the time, just the ones you are using. You can keep the databases automatically closed on startup by default, and just open the ones you want (and later close them and open others). That's a feature of IS that I have been using for years.
149  Main Area and Open Discussion / General Software Discussion / Re: how can I do this in excel? on: March 23, 2015, 08:36:49 AM
What are the business requirements please? I might be able to help.
150  Main Area and Open Discussion / Living Room / Army camouflage training on: March 23, 2015, 05:57:08 AM
Army camouflage training:
