User talk:Inductiveload/Archives/2011

Layout override
Hi,

Is there some difference between layout_override.js and layout_override2.js with regard to performing a page purge or similar in their execution?

Override2 "seemed" to work as far as I was concerned, loading the selected layout for the pages set up for one while behaving normally for the pages that weren't, but the behavior stops performing flawlessly after a good session of viewing multiple pages.

I've noticed that if I cycle through a fair number of preset-for-a-default pages, "leave" for un-edited pages and later come back to basically the same cycled set of pages, the behavior is gone: a page either goes right to Layout 1 if not visited earlier, or to the last Layout selected, no matter whether the page is preset or not. Neither hard nor soft browser/cache refreshes bring up Layout 2 at that point either, but a clock [gadget] purge forces it back to behaving as designed - but for that page only. I would have to purge every page after that, so that's why I ask whether the old script had a purge and the new(er) one does not, or something along those lines.

I could not repeat this under your first ( & still my preferred ;) override.js. -- George Orwell III (talk) 02:51, 10 September 2011 (UTC)


 * That could be the cookies talking. On closer inspection of what the cookies are storing, I have found they also store the path of the current page. The cookie I set is set for the path of the parent of the page you are on. So, for example, Gesenius' Hebrew Grammar/134. Syntax of the Numerals having the default layout set would not affect A Tramp Abroad/VIII, and vice versa. This is an artefact of how cookies are manipulated in others' code, and is consistent with the existing settings for dynamic layouts (whether that is appropriate is another discussion, personally I think it is a reasonable approach in concept, but perhaps not very self-evident). Could your experience be caused by that? If not, more investigation is required, since something is coming unstuck somewhere. Inductiveload— talk/contribs  09:50, 11 September 2011 (UTC)
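If it helps picture the path scoping described above, here is a toy sketch (the function name and the paths are illustrative only, not the actual script's code): the stored default applies only to pages sharing the same parent path, so one work's setting cannot leak into another's.

```javascript
// Illustrative only: a cookie scoped to the *parent* path of the current
// page keeps one work's default layout from affecting another work.
function layoutCookiePath(pagePath) {
  // "/wiki/A_Tramp_Abroad/VIII" -> "/wiki/A_Tramp_Abroad"
  return pagePath.substring(0, pagePath.lastIndexOf('/'));
}

var a = layoutCookiePath('/wiki/A_Tramp_Abroad/VIII');
var b = layoutCookiePath("/wiki/Gesenius'_Hebrew_Grammar/134._Syntax_of_the_Numerals");
// a and b differ, so a default set under one parent cannot affect the other
```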


 * Well, I've gone down a list of ticking settings on & off in every combination I could think of, and sooner or later the expected behavior for pre-set pages degrades to acting like any other page. I've noticed that the farther down I go in subpages, the less stable the initial behavior becomes. Could this duality of paths being stored have URLs encoded one way (without any hex, binary, etc. characters, if you follow me here) at first but, for whatever reason, later have some of those characters introduced into the URL path, so that "it" thinks it's visiting a completely different URL, or something akin to a mismatch? -- George Orwell III (talk) 10:14, 11 September 2011 (UTC)
 * Which work does this problem occur in? I can't see any works which have this applied on multiple levels except Wonderful Balloon Ascents, and that seems to work no matter how much I click around (note that if you set the layout from within Part 1, Part 2 will not be affected, as it is a different path). Also, does the button to toggle it appear correctly, or is it just that the setting is insufficiently "sticky"? Inductiveload— talk/contribs  10:07, 14 September 2011 (UTC)

Layouts + headers
Also, I'm wondering, if you have the time, if you'd take a look at my .css & .js - I've tried to "push" the common header "out" of being part of the dynamic_layouts altogether. It seems to do the trick, but I'm sure there is a more elegant way to utilize existing elements, ids or classes than introducing another div wrapper of sorts. -- George Orwell III (talk) 02:51, 10 September 2011 (UTC)
 * Yes, that is on my list to look at, since other users have complained of the header being "crushed" by the layout in Layout 2. This can be approached in two ways: either adjust the DL code at source, or move the header after the fact using local code. The former way would affect all wikis using DLs, and the latter way would probably be doable by finding and moving the header HTML to before the DL container div, probably by using classes (not all the custom headers give a nice ID to glom on to). In the latter case, thought would also need to be given to things that come before headers, like "similar" hatnotes and what-have-you. Inductiveload— talk/contribs  09:50, 11 September 2011 (UTC)
 * This all goes back to what I thought was a lame workaround at the time in wrapping the header in its own ID=headertemplate DIV just to get this dynamic stuff rolled out faster & with less objections. IMHO, a #contentSub3 DIV should have been created and the header template then loaded into it, if anything. The usurping (or rendering impotent?) of all the existing div wrappers in use prior to roll-out was another 'nice touch'. So not only did the header become "crushed" under dynamic layouts, but so did auto-generated wikiTOCs, reference/editor notes, copyright/license banners and similar footers when enabled. Those are what I call cracks in the foundation (of universal dynamic layout capability in the main namespace) btw. -- George Orwell III (talk) 10:36, 11 September 2011 (UTC)
 * Actually it's good practice, and not at all lame, to add IDs to unique elements on a page, so other scripts can find them later (and so the code is clear). Having said that, a separate "id=metadata" div for header templates and such would be sensible too, as all the metadata can be handled together by CSS (I'll come to this). The fix is, on reflection, trivial:

$('#contentSub').nextUntil('#catlinks').wrapAll("<div.....
 * becomes

$('#headertemplate').nextUntil('#catlinks').wrapAll("<div
 * This means that everything after the header template is stuck in the "text-wrap" div, but the header template is unmolested in the "contentSub" div. This breaks "Layout 3" since the header is no longer where it was expected, but tweaking the top and right values can fix that. However in Layout 3, any additional metadata (eg. similar) is not taken with the header, so it gets stuck to the top of the content instead. This is already happening. A solution to that is to add a little code in the PageNumbers.js to stick all the metadata code into a new div and manipulate that with the Layout styles. This would need to either be done globally, affecting all WSes, or forked and done locally. Inductiveload— talk/contribs  09:53, 14 September 2011 (UTC)
 * The other point to consider is that when you rely on contentSub and/or contentSub2, you inherit their CSS settings in chick.css or main.css - making the fonts and layout even worse than those imposed by the DLs alone. No, I'm afraid the creation of a contentSub3 DIV to "hold" the header template(s) would be the optimal solution, as it would be independent of any current or future Wiki-wide use of the standard contentSub & contentSub2 divs. In addition, I think the current CSS needs to have the header portions expanded to include & set font-families and such, so the DLs don't go re-formatting the header text the same as found in their text container settings when applied. The same way the header needs to be independent of DL, so too should there be a section at the end of the text content that's not affected by DLs if need be. -- George Orwell III (talk) 02:57, 17 September 2011 (UTC)
 * As for the "cracks", I quite agree that the DL solution is not perfect, but these flaws are not fatal. I wasn't paying attention to this side of Wikisource at the time and I am only now able to take a peek into the code. The "cracks" are exactly what I am trying to sort out, since I have gripes with the Layout system too. As you can see, there are several disparate issues to resolve, including, but not limited to:
 * Headers & FOOTER-like banners stuck in the layout
 * Layouts not defaultable, should that be allowed?
 * Page numbers not easily inlineable
 * Layouts are not applicable to any work that is not transcluded, but if we change that, what about the pages we don't want to apply layouts to? Should you have to "opt-in" to them by transcluding a blank page, or opt-out by using a layout-killer template?
 * Some of these issues are one-liner fixes, some are more in-depth. However, they all are governed by the same code: PageNumbers.js, which is not hosted at enWS (though default layouts is a tack-on). If you can think of any others, feel free to tell me about them and I'll take a look-see. Inductiveload— talk/contribs  11:08, 14 September 2011 (UTC)

Universal layouts
that reminds me - redirect pages should probably be excluded from all this if universal is what is to come -- George Orwell III (talk) 10:39, 11 September 2011 (UTC)
 * Well, no one intentionally lands on a raw redirect page except an editor wanting to make a change, so disabling layouts on redirects is not really useful, IMO. However, disabling on disambiguations or version pages probably would be a good idea, and if you do that, you may as well do the redirects too. That's not a difficult thing to do, as long as the relevant templates have IDs or classes to identify them as layout-killers (you could even do it with a central template, and be able to track pages). Inductiveload— talk/contribs  09:53, 14 September 2011 (UTC)
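To illustrate the opt-out idea, a minimal sketch (the class names here are hypothetical placeholders, not anything the current templates actually carry): the layout code would check the page's classes against a central list before doing anything.

```javascript
// Hypothetical "layout-killer" classes; any template exposing one of
// these would suppress dynamic layouts on the page transcluding it.
var LAYOUT_KILLERS = ['ws-disambig', 'ws-versions', 'ws-redirect'];

// pageClasses: an array of the class names found on the page content.
function layoutAllowed(pageClasses) {
  for (var i = 0; i < pageClasses.length; i++) {
    if (LAYOUT_KILLERS.indexOf(pageClasses[i]) !== -1) {
      return false;
    }
  }
  return true;
}
```

A central template carrying such a class would also make the opted-out pages trackable, as noted above.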

PotM images may be of interest
Just to let you know that we have a number of images, and many look very GIMP-ready. — billinghurst  sDrewth  11:56, 1 October 2011 (UTC)
 * You might have added that there are quite a few :-p. However, they look interesting and useful, and that would be a very nice image category, so we'll see! Inductiveload— talk/contribs  22:54, 1 October 2011 (UTC)

Bulk mover
Hi,

I finally got around to fixing up a previously started .djvu Index: from last year, and I am in need of that utility that moves Pages: by an offset value, which you offered up a few weeks ago.

For Index:Title 3 CFR 2000 Compilation.djvu, I need all the currently existing Pages: moved down by (minus) 13. In other words...
 * Page:Title 3 CFR 2000 Compilation.djvu/14 becomes Page:Title 3 CFR 2000 Compilation.djvu/1
 * Page:Title 3 CFR 2000 Compilation.djvu/15 becomes Page:Title 3 CFR 2000 Compilation.djvu/2
 * ... all the way to the old end, the last move being:
 * Page:Title 3 CFR 2000 Compilation.djvu/496 becomes Page:Title 3 CFR 2000 Compilation.djvu/483
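For anyone checking the request, the move list above is mechanical; a toy sketch (not the actual page-shifter script) of generating it, noting that a downward shift is collision-free when applied in ascending order:

```javascript
// Build [from, to] pairs for shifting a run of Page: numbers by a fixed
// offset. With a negative (downward) offset, ascending order is safe:
// each target slot is vacated before it is needed.
function shiftMoves(first, last, offset) {
  var moves = [];
  for (var p = first; p <= last; p++) {
    moves.push([p, p + offset]);
  }
  return moves;
}

var moves = shiftMoves(14, 496, -13);
// 483 moves in all: [14, 1] first, [496, 483] last
```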

If you don't have the time, etc., please touch-back so I can make a formal Bot Request instead. Thank you for any attention in advance. George Orwell III (talk) 22:37, 1 October 2011 (UTC)
 * In progress, at a rate of about 6 pages per minute. Inductiveload— talk/contribs  22:53, 1 October 2011 (UTC)
 * ✅. For anyone interested, the script that did this is at User:Inductiveload/Scripts/Page shifter. Inductiveload— talk/contribs  01:28, 2 October 2011 (UTC)
 * Thank you! Everything reflects the updated djvu now. -- George Orwell III (talk) 02:34, 2 October 2011 (UTC)

Shake your 1.18 booty
Do we go back (after announcing) that we are ready to again trial the tools that were waiting for default load aspects, and with relative paths? — billinghurst  sDrewth  06:18, 6 October 2011 (UTC)
 * The default gadget is now on by default for IP editors; not sure about people who haven't enabled it manually, since I already set my setting to "off" and defaults only count if you haven't set it already. Inductiveload— talk/contribs  22:29, 9 October 2011 (UTC)

Bulk mover - Female Prose
Hey again,

Wanted to bring you and your excellent brain in on a recent request & discussion by ResidentScholar concerning a past PotM work, Female Prose Writers of America.

Long story short - it turned out the initial Archive.org DjVu file everyone jumped in to complete was flawed. When compared to the same edition hosted on HathiTrust, it was determined that 5 pages of text were completely missing and that the double trailing blank pages after each portrait image page had also been trimmed. This made what should have been 480 pages total into only 461 pages - making for 19 possible page insertions at various points throughout the work in order to restore it to the original 1852 published state.

Now I've already gone ahead and fixed up the DjVu and uploaded it locally for comparison/testing purposes while I wait for ResidentScholar to come around again and let me know how he'd like to move forward with this. Basically, I wanted to bring you into the loop on the matter prior to his reply, for your input/thoughts: how doable or realistic is a bulk move with a 19-page offset staggered throughout the work, or should we drop the idea of restoring the double trailing blank pages per portrait image page completely and go with what amounts to just a 3-page staggered-offset move? TIA. -- George Orwell III (talk) 21:48, 8 October 2011 (UTC)
 * Well, the 3 page shift is easy-peasy. The staggered shift is harder, but certainly not impossible. I am happy to do either. In my opinion, the double blank pages are not that important, and I personally wouldn't bother, since they just get a "without text" rating and get ignored, but since they have been put in the DjVu, they may as well stay. Tell me when I can start, and I'll get on it. Inductiveload— talk/contribs  23:48, 9 October 2011 (UTC)
 * I don't make the rules- I just try to follow them. If the card catalog, bibliographic data, etc. sez the damn book is 480 pages then we should have 480 pages (blank or otherwise).
 * Anyway, the replacement file is up and the pagelist now reflects the new condition. The first "run" is for existing djvu pages 363 to 462 to be moved +18 to the end (480). Then 2 blank pages need to be created before continuing to the next section and corrected offset. -- George Orwell III (talk) 02:09, 10 October 2011 (UTC)
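A hedged sketch of the arithmetic behind such a staggered shift (generic; the insertion points beyond the 363/+18 run aren't spelled out here, so the example values below are illustrative): each old page moves up by the number of pages inserted before it, and upward moves have to be applied from the back of the file forwards to avoid collisions.

```javascript
// insertions: list of [firstAffectedOldPage, pagesInsertedBeforeIt].
// Returns where an old page number lands in the repaired file.
function newIndex(oldPage, insertions) {
  var shift = 0;
  for (var i = 0; i < insertions.length; i++) {
    if (oldPage >= insertions[i][0]) {
      shift += insertions[i][1];
    }
  }
  return oldPage + shift;
}

// With 18 pages inserted before old page 363:
// old page 462 lands at 480, while old page 362 stays put.
```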
 * ✅ Phew, that was a real pig. The WS servers have been crapping out every few page loads and the image caches refuse to be purged, so it's been a mission! It's all done now; it just needs the blanks filling in, and the mainspace "pages" tags renumbering. I have done from Elizabeth Wetherell to the end already. Inductiveload— talk/contribs  02:07, 12 October 2011 (UTC)

Manifesto
Hello, this is regarding the Manifesto of the w:Hindustan Socialist Republican Association that was supposedly distributed at the 1929 Lahore session of the Indian National Congress. (The text of the manifesto may be found at the following links:
 * English link 1
 * English link 2
 * Snippets in Hindi)

I am not aware whether the original manifesto was in English or Hindi or both.

The text at these links mentions two names, either or both of whom may be (considered) the authors of this text. They are named in the text as:
 * B.C.Vohra
 * Kartar Singh

Per this Google Books link (go to page 46), the full name of B.C. Vohra is Bhagwati Charan Vohra, who, per this page of the Punjab Museums website, died on 28 May 1930. And per page 6 of this pdf, the Kartar Singh named in the text happens to be w:Kartar Singh Sarabha, who died on November 16, 1915.

Now, as far as my knowledge goes after a detailed IRC conversation with Doug, there are two possible cases for its copyright, with two subcases each.

Now, as far as I understand the Indian Copyright Act 1957 (current official version, related rules, w:Copyright Law in India), since both (possible) authors died more than 60 years ago, this text should be in the public domain. In this, I'm assuming that the 1957 law applies instead of the 1914 Indian Copyright Act, which itself was a modified version of the British w:Copyright Act 1911 (original text of the British law).

This leads me to the conclusion of the last case to be public domain in India. Also in this case, since the US applies the w:Rule of the shorter term, the text should also be public domain in the US. Hence the result:

Now, either we need to determine which case applies or resolve all 4 for possible results. If we are to do the (longer) latter process, I would like to add the following information to this:
 * The manifesto was probably never officially published in British India (since the organisation concerned was a revolutionary one, and probably banned or illegal). I may be wrong in assuming this. Also, it might have been officially published for the first time after Indian independence (1947), posthumously. Alternately, it may have been officially published elsewhere (outside India) first.
 * I do not yet know of the 1914 Indian Copyright Act and whether it required registration (which is highly unlikely to have taken place considering the nature of the organisation). Also unknown to me are the effects of the w:Berne Convention, if the 1914 Act indeed applies (whether the convention restores/applies copyright to this text).

Also, all this began with हिन्दुस्तान सोशलिस्ट रिपब्लिकन एसोसिएशन, which includes text from the Hindi link given above. I'm adding a link at its talk page to this page for clarifications on copyright.

Lastly and most importantly, sorry for such a long message, and thanks for any help in advance.--Siddhartha Ghai (talk) 21:29, 9 October 2011 (UTC)


 * The reference to the rule of the shorter term is technically incorrect and was my fault for misleading Siddhartha Ghai. The relevant rule is that works that were out of copyright in their home country on the date the country acceded to Berne and GATT/URAA are in the public domain in the United States unless they were published in the United States and complied with all technicalities (normally, prior to their home country copyright expiring).--Doug.(talk • contribs) 19:13, 10 October 2011 (UTC)

Djvulibre
Do you know where I can find instructions for adding pages to a djvu file with Djvulibre? I need to add place-holders for a couple of missing pages. Thanx in advance Misarxist (talk) 10:56, 11 October 2011 (UTC)
 * DjVuLibre Documentation:djvm - I recently used it for exactly this purpose. Bottom line, the syntax is djvm -i target.djvu page-to-insert.djvu pagenum, where "pagenum" = the page you want to insert the page before (thus it will become the page number of the inserted page).--Doug.(talk • contribs) 11:15, 11 October 2011 (UTC)

volume error
Your recent upload File:The International Jew - Volume 2.djvu is actually volume 3. I would normally endeavour to resolve the problem myself, but facilitating the dissemination of this sort of material is potentially a criminal offence in Australia. CYGNIS INSIGNIS 01:38, 14 October 2011 (UTC)
 * Are you sure? It appears to be Chapters 21–42, just as expected. Volume 3 is Chapter 43–61. What exactly do you see to be the problem? Inductiveload— talk/contribs  02:09, 14 October 2011 (UTC)
 * None since you fixed the source, did you not imagine that was the problem I saw? CYGNIS INSIGNIS 02:48, 14 October 2011 (UTC)
 * Well, not since you didn't use the words "source" or "link". I thought you meant the file contents were swapped. Inductiveload— talk/contribs  02:51, 14 October 2011 (UTC)

Guiding newcomers to the proofreading interface
I'm thinking about changing the output of small scan link from "scan index" to "scanned pages" in order to make it clearer for newcomers. See the Scriptorium discussion at: It is hard to find the proofreading interface. Any objections? Heyzeuss (talk) 20:54, 14 October 2011 (UTC)
 * Of course not! Feel free to go ahead and change anything. Be WP:BOLD. Your idea is a good one to improve accessibility, which is very important. If you can think of any other ways to make anything clearer for newcomers, feel free to go ahead with them too. We old and crusty Wikisourcers often don't realise what new contributors find difficult, or what the best way to help them is. The best people to make it easy for newcomers are other newcomers! Inductiveload— talk/contribs  19:22, 15 October 2011 (UTC)

One problem I forgot to mention
Hi. There is one important issue which I completely forgot to mention yesterday on my talk page, regarding software update related issues, and that is the failure of the "sortable" tables.

I have a series of four sortable tables, each holding about 320 records, starting with Wikisource:WikiProject Popular Science Monthly/Authors A to D. Regardless of whether the table is defined as "wikitable" or "prettytable", the sort feature doesn't work properly. Meaning that the first two columns sort only descending but not ascending, and the last three columns don't work at all. Until yesterday, even the arrow heads didn't show. This table helps me update authors on a volume by volume basis and is much needed. When you have the chance, could you please look at them? Thanks in advance. — Ineuw talk 22:14, 17 October 2011 (UTC)
 * I think I see the problem, and I'm trying to fix it. What is happening is that the tablesorter's cache contains only the first column and the row number. When you sort by the first col, it sorts as you expect; when you sort by the second col, it sorts by the original position of the row (which is already alphabetical, so there is the illusion of it working). If you sort by other columns, it breaks because it doesn't have the data on hand. Inductiveload— talk/contribs  23:53, 17 October 2011 (UTC)
 * I'm an idiot. This problem was caused by the first non-header row having only one cell, so the code thinks the table has only one column, and therefore all the other columns cannot be sorted. Adding class="sortbottom" to the first row ("A"), just after the "|-", will sort the row to the bottom and allow it to work. Otherwise, inserting a row with the right number of cells above the "A" would do the trick too. Inductiveload— talk/contribs  01:57, 18 October 2011 (UTC)
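For illustration, a minimal sketch of the fix (made-up rows; the point is the single-cell divider row carrying class="sortbottom", so the sorter pushes it to the bottom instead of treating it as the first body row and miscounting the columns):

```wikitext
{| class="wikitable sortable"
! Author !! Volume !! Notes
|- class="sortbottom"
| A
|-
| Agassiz, Louis || 1 || example row
|-
| Abbott, Charles || 2 || example row
|}
```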
 * Thank you with my gratitude!!! In biblical terms this would translate into a payment of 10 sheep, some goats (but not my daughter's hand). The shadow of suspicion about the first row did pass my already clouded mind, but since the arrows were also missing at first, I thought that it will be resolved by the higher powers without a special petition. :-) Take care. — Ineuw talk 02:36, 18 October 2011 (UTC)

User:Inductiveload/Sandbox7
There was a small trouble in the syntax of the match — Phe 10:04, 24 October 2011 (UTC)
 * Thanks! That was a silly error! Perhaps the M&S header regex could be a bit more lax with spaces to guard against that? Inductiveload— talk/contribs  03:04, 25 October 2011 (UTC)

Graded German Readers
In case you didn't get my e-mail, http://openlibrary.org/books/OL24276808M/Graded_German_readers now appears to be available. Please get it but parts of it may not be eligible for upload, so we need to talk about it to verify what parts are useable. Thanks.--Doug.(talk • contribs) 06:24, 1 November 2011 (UTC)

←→ that-a-way brace with magic math?
Are you able to do any magic with math like you did with brace2, though turned 90°? I need a downward-pointing brace for Page:Problems of Empire.djvu/138. Thanks if you can. — billinghurst  sDrewth  10:04, 7 November 2011 (UTC)
 * if math does not work, could it be a possibility to adapt custom rule somehow for use in tables, and make the necessary image resemble braces? --Mpaa (talk) 12:34, 7 November 2011 (UTC)
 * Horizontal braces are tricky, because the width is very variable, which tends to break up image segments. Custom rule will have a similar problem, but instead of being broken up, it will be too short to make the span. I am thinking of a better javascript-based (+SVG??) solution, but it might take me a while to work out. Feel free to try alternative solutions while I fiddle, but I personally don't see great mileage in either solution (even for normal braces - it's currently a hack). Inductiveload— talk/contribs  00:15, 8 November 2011 (UTC)
 * Yep. Generally for our works we are working with braces that can be stretched down the length, but not necessarily expanded in the width; the only difficulty with stretching the length is that the centre point can be deformed too wide and ugly. There is Commons:Category:Bracket segments, however I am not in a jigsaw mood. — billinghurst  sDrewth  05:09, 8 November 2011 (UTC)
 * Good timing. I've just finished a proof-of-concept Javascript that constructs an image of the right size based on the cell the brace is in. Add User:Inductiveload/braces.js and then check out User:Inductiveload/Sandbox7.
 * Known issues are:
 * Not sure about the browser compatibility, but it should be IE 8+ at least (which is better than SVG would be). Bloody IE.
 * Sometimes the width maths is broken, because the addition of the image alters the table cell's width after the image is drawn, so there's a circular dependency
 * Rendering in edit mode is probably broken
 * It could be done by a div full of image segments as required, but I think that is messy on the output side of things and also needs a lot of HTTP requests to Commons, which slows things down dreadfully.
 * Anyway, go and have a look and a play in that sandbox; I'm heading off for now. Inductiveload— talk/contribs  05:27, 8 November 2011 (UTC)
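For the curious, the core arithmetic of such a script might look like this sketch (segment names and pixel heights are made-up; the real braces.js may well do it differently): measure the cell, then work out how many repeatable middle segments are needed between the two end caps.

```javascript
// Given the pixel height of the table cell and the heights of the brace's
// top/bottom caps and repeatable middle segment, return how many middle
// segments are needed to span the cell (0 if the caps alone suffice).
function middleSegments(cellHeight, capHeight, midHeight) {
  var remaining = cellHeight - 2 * capHeight;
  return Math.max(0, Math.ceil(remaining / midHeight));
}
```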
 * The only proof for your concept is that it is too hard for this sparrow. I somehow got Inkscape to squish P/child's down SVG and it will do for my grade of work until someone invents something that I can use. — billinghurst  sDrewth  12:13, 8 November 2011 (UTC)

how to use query output into javascript
Hi. Following our talk on client-side, I would like to learn how the API output can be used in a javascript (not necessarily to solve that problem, but to learn how that could be done). I struggled but still did not manage to import the data. Could you give me a hint and just write down the (very few?) lines of code needed to get the query result into a javascript variable? Starting from there I can try to continue with my own legs. Thanks --Mpaa (talk) 22:32, 12 November 2011 (UTC)
 * Certainly. I'm not at a computer with a useable Javascript development environment at the moment; I expect I'll be back to it later today. Sorry for the delay. Inductiveload— talk/contribs  15:21, 14 November 2011 (UTC)
 * Thanks. No hurry anyhow. BTW, what are you using as Javascript development environment? Bye --Mpaa (talk) 20:29, 14 November 2011 (UTC)
 * I use Firefox and Firebug. It's a simple combo, but sometimes like today I have only a locked-down computer with an old IE on it.
 * As for the JS used to process an API call to MediaWiki, if you have jQuery (which you will have on WMF sites) you use the getJSON function. The following code looks at a page, gets all the transcluded Page pages, and adds the title of each page to the top of the current HTML document with the first category that Page: page is in:
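A sketch of such a script follows (a hedged reconstruction wired together from the parameter breakdown below, not the original snippet; the jQuery guard means it only fires where jQuery is actually loaded, e.g. in an on-wiki user script):

```javascript
// Build the API query URL; each parameter is explained in the list below.
var apiUrl = 'http://en.wikisource.org/w/api.php?action=query'
  + '&titles=Hudibras/Part%201/Canto%201'
  + '&generator=templates&gtllimit=500&gtlnamespace=104'
  + '&prop=categories&cllimit=500'
  + '&format=json&callback=?';

// On a wiki page, jQuery is available: fetch the data and list each
// transcluded Page: page with its first category at the top of the document.
if (typeof jQuery !== 'undefined') {
  jQuery.getJSON(apiUrl, function (data) {
    // data.query.pages is an object keyed by page ID
    jQuery.each(data.query.pages, function (pageId, page) {
      var firstCat = page.categories ? page.categories[0].title : '(none)';
      jQuery('body').prepend('<div>' + page.title + ' - ' + firstCat + '</div>');
    });
  });
}
```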

http://api.jquery.com can tell you the jQuery functions' purposes better than I can. The API URL is constructed thusly:


 * http://en.wikisource.org/w/api.php - path to the WS API interface
 * action=query - we wish to run a query on the database
 * titles=Hudibras/Part%201/Canto%201 - title of the page you are interested in
 * generator=templates - we are looking for transcluded pages, which are equivalent to templates for this purpose
 * gtllimit=500 - template generator limit of 500, the maximum for a non-bot user. It is very unlikely that you will exceed this on a single mainspace page, but it is likely you will exceed it if you try to find all the pages in an index
 * gtlnamespace=104 - template generator namespace = 104 = Page:
 * prop=categories - we are interested in the categories of the transcluded pages
 * cllimit=500 - category limit = 500, max, etc
 * format=json - return the data in JSON (JS data format), as opposed to XML
 * callback=? - this is needed to allow the program to receive the data and process it

I have used anonymous functions to process the data to keep it all inline; you can use named functions to keep larger code in order. This is not the simplest JS in the world, but it is not too bad. Just keep building up from small blocks into larger ones. Hope that helps, Inductiveload— talk/contribs  00:48, 15 November 2011 (UTC)
 * Thanks --Mpaa (talk) 17:45, 15 November 2011 (UTC)

PSM vol 43 and 47 bulk text moves
At this juncture, I can only offer my thanks for your efforts, since (in biblical context), I've already given away all my livestock, (but I am keeping Pippa). Thanks for all your help. Also, remaining truthful as always, this note of thanks is almost identical to the one I posted to GO III. The difference is, that there, I didn't mention Pippa. :-) — Ineuw talk 04:10, 15 November 2011 (UTC)
 * My mother taught me that it is bad manners to take daughters in payment for services rendered, though she was (happily for me) silent on the matter of livestock. I also expect the lawmakers in my area might have frowned upon the transaction. Thus I think I have managed to obey the rules of my upbringing and of the state. You are quite welcome. If you need any more bot work, feel free to drop a note here or at WS:BOTR (which is now on my watchlist). Inductiveload— talk/contribs  17:39, 17 November 2011 (UTC)

NARA bot issues
I'm going to try something novel and leave you a talk page message instead of bugging you on IRC. :-) Since the change to the image compression, I've been getting occasional errors during the conversion step which kill the bot. It appears to be repeatable (that is, the same files cause the error when retried), but I can't see any pattern in them. Any ideas? Dominic (talk) 18:19, 17 November 2011 (UTC) Solved!
 * I should probably write up my notes for the other features I've talked about. If you can find time for these, it would be great.
 * Multi-page support: the bot currently runs through a folder in order, generating file names based on the Toolserver output for each file. However, since a multi-page document means there are multiple files associated with the same ID, it currently just tries to upload them under the same name. The new functionality I envision is: on retrieving a file for upload from the designated folder, the script finds its ID and then scans the list for any other files with that same ID. If it finds none, it uploads as it does currently, but if it finds others, then it uploads them each in sequence, with ", page X" appended to the file name. It also adds a gallery of the other pages using the "Other pages=" parameter.
 * File name correction: the bot currently does hash-based scans for duplicates (for TIFFs) and skips a file if it finds a duplicate. The bot currently has filemover status in order to correct some file names which are a result of (1) some pre-bot manual uploads, (2) some non-standard titles from early in the bot's development, (3) a lot of mistakes where one page of a multi-page document was uploaded as if it were a single-page document, and it would have to be moved to the correct ", page X" name for the multi-page function above. For this to work, the script would have to still retrieve the metadata from the Toolserver after detecting a duplicate and scan for other pages based on the ID, then compare the expected file name with the real one, and then move it to the expected name if there is a difference. Since that involves loading another web page each time for a very specific task, it would be nice if this function could just be turned on with an optional argument, rather than being standard.
 * These are just the ramblings of a non-programmer, but hopefully the logic makes sense. No pressure, of course, I just realized that I've talked about this a couple of times before, but hadn't put it down clearly in writing. Dominic (talk) 19:42, 17 November 2011 (UTC)
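The multi-page naming rule in the first bullet reads to me like the following sketch (field names are guesses, and whether page 1 of a multi-page group also gets a ", page 1" suffix is my assumption, not something stated above):

```javascript
// Group candidate files by their (NARA-style) ID; lone files keep their
// plain title, while every member of a multi-page group gets ", page X".
function uploadNames(files) {
  var byId = {};
  files.forEach(function (f) {
    (byId[f.id] = byId[f.id] || []).push(f);
  });
  var names = {};
  Object.keys(byId).forEach(function (id) {
    byId[id].forEach(function (f, i) {
      names[f.path] = byId[id].length === 1
        ? f.title + '.tif'
        : f.title + ', page ' + (i + 1) + '.tif';
    });
  });
  return names;
}
```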

Must purge page to magnify page scan II . . . continued
Hi. Continuing my previous post at User_talk:Inductiveload: out of curiosity and to better understand, I am challenged by the mystery as to why this feature "jammed". I profess to being largely ignorant of javascript and CSS, but thought that my observation may provide a clue to the problem. I noticed that the text area on the left of the Page: namespace has been modified and made wider. I am not even sure that this is the case, but while struggling with a centered title being off, I noticed that the left and right margins are unequal (Page view). This was confirmed by the use of the Mioplanet pixel ruler gadget to verify my "educated" guess. P.S.: This is absolutely NOT a burning issue. Just "picking" your brain. :-) — Ineuw talk 08:12, 23 November 2011 (UTC)

Scripts
Thanks for putting up your scripts, and I will use your suggestion with AutoHotkey. Heyzeuss (talk) 12:14, 1 December 2011 (UTC)

Columns
Would you take a look at Proposed_deletions, any ideas on next steps towards closure? Jeepday (talk) 14:02, 18 December 2011 (UTC)