User:JVbot

This user account is a bot operated by Jayvdb (talk), and approved by the English Wikisource community.

Patrolling
The bot automatically patrols pages listed on the Whitelist, which may be modified by anyone at present. See the bot patrol log.

Each line should start with a username, followed by a list of entries, each of which may be any of the following:
 * Pagename or Special:Prefixindex/ : these two are functionally equivalent, as they permit the page or subpages to be created and modified. The latter is easier on the eyes if the former would be a red link.
 * Author: : any page listed on the author page
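A sketch of how such a whitelist line might be interpreted (the separator and matching rules here are assumptions, not the bot's actual code; Author: entries would additionally need the author page's work list to be resolved):

```python
# Hypothetical sketch: parse a whitelist line and test whether an entry
# covers a page. Assumes whitespace-separated entries after the username.
def parse_whitelist_line(line):
    """Split a whitelist line into (username, entries)."""
    parts = line.split()
    if not parts:
        return None
    return parts[0], parts[1:]

def entry_covers(entry, pagename):
    """Decide whether a single whitelist entry covers a given page."""
    if entry.startswith("Special:Prefixindex/"):
        # Prefix entries cover the page and all of its subpages.
        return pagename.startswith(entry[len("Special:Prefixindex/"):])
    # A plain pagename behaves the same way: the page itself or a subpage.
    return pagename == entry or pagename.startswith(entry + "/")
```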

WFB Flags
✅ This task is presumed complete but is waiting for the backlog at Category:Speedy deletion requests to be cleared.

Stage 1
As part of an audit of images on Wikisource, all of the CIA World Fact Book flags are now replaceable with higher-resolution images uploaded to the commons using the naming convention. The notable exceptions are three flags used by the WFB only for islands of a sovereign country, which for some inexplicable reason have different dimensions to the flag of that country: Image:Flag of New Zealand (islands) (WFB 2004).gif, Image:Flag of Australia (islands) (WFB 2004).gif and Image:Flag of Norway (islands) (WFB 2004).gif.

The exact replacements that will occur can be found and improved on /WFB Flags. The method used is described in /WFB Flags/Method.

Stage 2
After replacing the flag images, the old flags need to be tagged for deletion. In case any flags were not replaced, the bot will be tagging only png files that appear on Special:Unusedimages.

 * Command:

python unusedfiles.py -ext:png \
 -tag:'sdelete|A1: all use of this image has been replaced with a higher res image now on commons'

 * Sample output:

Getting 60 pages from wikisource:en...

Image:WFB Flag of Afghanistan.png + +

Do you want to save the changes? (Y/N)

EB1911
This task is to handle the problems outlined at Bot requests.

Once complete, the task specific changes to 1911 Encyclopædia Britannica/Header will be removed.

Problem 2 & 3
To fix these two, the header template will be changed to detect the params "article", "nonotes" and any others that are no longer used, and place the articles into the category Category:EB1911 subpages needing header changes (a subcat of Category:Wikisource maintenance). The bot will then run through pages in that category and update the header as follows.


 * 1) param "article" will be removed
 * 2) param "nonotes" will be changed to wikipedia="none" if a wikipedia param doesn't already exist

Pages in the category after the bot has completed will need to be fixed manually.
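The two header changes above can be sketched as plain regex rewrites (a minimal sketch, assuming the usual |param = value template syntax; not the bot's actual code):

```python
import re

def fix_header_params(text):
    """Apply the two EB1911 header fixes: drop "article", convert "nonotes"."""
    # 1) remove the obsolete "article" parameter entirely
    text = re.sub(r"\|\s*article\s*=\s*[^|}]*", "", text)
    # 2) change "nonotes" to wikipedia = none, but only if no wikipedia
    #    param already exists; otherwise just drop "nonotes"
    if re.search(r"\|\s*wikipedia\s*=", text):
        text = re.sub(r"\|\s*nonotes\s*=\s*[^|}]*", "", text)
    else:
        text = re.sub(r"\|\s*nonotes\s*=\s*[^|}]*", "| wikipedia = none ", text)
    return text
```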

Problem 5
There are a lot of EB1911 subpages that have filled the 1911 Encyclopædia Britannica/Header template's wikipedia param with a full wikilink, which is unnecessary as that param expects just the page name. The result was not pretty. A recent change to the EB1911 header has catered for this, hiding the mess, but it would be preferable to clean up the values in this param so that this kludge isn't needed.

To do this, the header will be modified to also categorise all subpages with the incorrect param value into Category:EB1911 subpages with incorrect wikipedia value (a subcat of Category:Wikisource maintenance). The bot will then run through pages in that category and fix the param value.


 * Command:

python replace.py -summary:'fix wikipedia param' -family:wikisource \
 -cat:EB1911_subpages_with_incorrect_wikipedia_value -regex \
 'wikipedia ?= ?\[\[(w:)?([^|\]]*)\|([^\]]*)\]\] and \[\[(w:)?([^|\]]*)\|([^\]]*)\]\]' \
 'wikipedia = \2 | wikipedia2 = \5' \
 'wikipedia ?= ?\[\[(w:)?([^|\]]*)\|([^\]]*)\]\]' \
 'wikipedia = \2'
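The intent of the fix can be illustrated in isolation; the pattern here is a simplified assumption, not necessarily the bot's exact regex:

```python
import re

# Assumed shape of the bad value: a full [[w:Target|label]] wikilink in the
# wikipedia param; the fix reduces it to just the target page name.
LINK = re.compile(r"wikipedia ?= ?\[\[(?:w:)?([^|\]]*)\|[^\]]*\]\]")

def fix_wikipedia_param(text):
    """Replace a wikilinked wikipedia param value with the bare page name."""
    return LINK.sub(r"wikipedia = \1", text)
```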

Problem 6
Currently, pages that are three levels deep are using header or header2. Changes have been made to EB1911 to better handle pages that are three levels deep, so a bot now needs to convert those pages to use the new EB1911 capabilities. User:Psychless/Temp holds a list of EB1911 pages without an EB1911 header.

python replace.py -links:User:Psychless/Temp -regex '(?ms) {{ (h|H)eader.*title *= *\[\[\.\.\/\|[^/]*\/([^]]*).*previous = *\[\[\.\.\/([^|]*)\|.*next *= \[\[\.\.\/([^|]*)\|[^}]*}}' ' '

Categorise media
Add media into Category:PDF files and Category:OGG files, etc.
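A minimal sketch of the extension-to-category mapping (the category names come from the line above; the extension list itself is an assumption):

```python
# Hypothetical mapping from file extension to maintenance category.
EXT_CATEGORIES = {
    ".pdf": "Category:PDF files",
    ".ogg": "Category:OGG files",
}

def category_for(filename):
    """Return the category a media page should be added to, or None."""
    for ext, cat in EXT_CATEGORIES.items():
        if filename.lower().endswith(ext):
            return cat
    return None
```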

Prepare for move to commons
Tag media with PD licenses and commons categories, and tag them with commons ok.

Dead end pages
We have nearly 2000 pages listed on Special:Deadendpages, which as a result do not turn up in Special:Statistics. These need a header, or they need an author page.

python replace.py -deadendpages -excepttext:'{{([hH]eader|no header)' -regex '(?ms)^(.*)$' "{{no header}} \1"
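In isolation, the command's effect amounts to prepending {{no header}} to pages that lack a header template; a minimal sketch:

```python
import re

# Mirrors the -excepttext check: skip pages that already carry a header
# template or the no header tag.
HAS_HEADER = re.compile(r"\{\{[hH]eader|\{\{no header")

def add_no_header(text):
    """Prepend {{no header}} unless a header template is already present."""
    if HAS_HEADER.search(text):
        return text
    return "{{no header}}\n" + text
```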

Header conversion
A script to convert header into header2:

python replace.py -namespace:0 -summary:'[bot] standardisation: replacing header with header2' \
 -cat:'Pages with arrow in previous param' -regex \
 '{{[Hh]eader([^}]*override_author[^=]*=[^|]*[A-Za-z][^}]*}})' \
 '{{subst:header-layout-override\1' \
 '{{[Hh]eader([^}]*)}}' \
 '\1' \
 'â' 'â' \
 'section *= *\(([^)]*)\)' 'section = \1' \
 'section *= *(.*)' 'section = \1' \
 '(section[^=]*=[^<}]*)' '\1: '

JCMatoeam
Remove the deprecated templates JCMatoeamV1 and JCMatoeamV2, and tag the empty pages with OCR.

python replace.py -transcludes:JCMatoeamV1 -regex ' {{JCMatoeamV1[^}]*}}<\/noinclude>'  '{{JCMatoeamV1[^}]*}}' 

python replace.py -transcludes:JCMatoeamV2 -regex ' {{JCMatoeamV2[^}]*}}<\/noinclude>'  '{{JCMatoeamV2[^}]*}}' 

History of Iowa
Move pages under History of Iowa From the Earliest Times to the Beginning of the Twentieth Century/4 into the Page: namespace.

python movepages.py -prefixindex:"History of Iowa From the Earliest Times to the Beginning of the Twentieth Century/4/" -prefix:Page:

Remove the header template from the moved pages, now under Page:History of Iowa From the Earliest Times to the Beginning of the Twentieth Century/4/.

python replace.py -prefixindex:"Page:History of Iowa From the Earliest Times to the Beginning of the Twentieth Century/4/" \ -regex '(?ms){{(h|H)eader[^}]*}}' ''

Replace the redirects with dated soft redirects once the result of the last two stages has been checked.

python replace.py -prefixindex:"History of Iowa From the Earliest Times to the Beginning of the Twentieth Century/4/" \ -regex ???

Easton's page name cleanup
There are a number of pages in Special:Prefixindex/Easton's Bible DIctionary (note the wrong capitalisation of DIc). The redirects need to be replaced with dated soft redirects.

python replace.py -regex -prefixindex:"Easton's Bible DIctionary" \ '#REDIRECT \[\[(.*)\]\]' \ '"\1"'
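In isolation the rewrite looks like this; the soft-redirect wikitext produced here is illustrative only, since the exact replacement text isn't recorded above:

```python
import re

REDIRECT = re.compile(r"#REDIRECT \[\[(.*)\]\]")

def soften_redirect(text):
    # Illustrative target wikitext: a dated soft redirect pointing at the
    # old redirect's target. The real template call may differ.
    return REDIRECT.sub(r"{{dated soft redirect|[[\1]]}}", text)
```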

A Course In Miracles
Page move requested.

python movepages.py -file:pagelist.txt " In " " in " " For " " for "
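The intended rename amounts to lowercasing the listed words in each title; a minimal sketch (word list taken from the command above):

```python
def fix_case(title):
    """Lowercase the prepositions listed in the movepages command."""
    for word in (" In ", " For "):
        title = title.replace(word, word.lower())
    return title
```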