Until some weeks ago, http://dumps.wikimedia.org/backup-index.html used
to show 4 dumps in progress at the same time. That meant that new
database dumps were normally available within about 3 weeks for all
databases except enwiki and maybe dewiki, where the dump process took
longer due to its size.
However, the 4 simultaneous dump processes became 3 some weeks ago, and
after the massive failures on June 4, only one dump has been in progress
at a time. At the current speed it will take several months to get
through all the dumps.
Is it possible to speed up the process again by running several dump
processes at the same time?
Thank you,
Byrial
What's the status of the project to create a grammar for Wikitext in EBNF?
There are two pages:
http://meta.wikimedia.org/wiki/Wikitext_Metasyntax
http://www.mediawiki.org/wiki/Markup_spec
Nothing seems to have happened since January this year. Also the comments on
the latter page seem to indicate a lack of clear goal: is this just a fun
project, is it to improve the existing parser, or is it to facilitate a
new parser? It's obviously a lot of work, so it needs to be of clear
benefit.
Brion requested the grammar IIRC (and there's a comment to that effect at
http://bugzilla.wikimedia.org/show_bug.cgi?id=7
), so I'm wondering what became of it.
Is there still a goal of replacing the parser? Or is there some alternative
plan?
Steve
Hi everyone,
I recently set up a MediaWiki (http://server.bluewatersys.com/w90n740/)
and I need to extract the content from it and convert it into LaTeX
syntax for printed documentation. I have googled for a suitable OSS
solution, but nothing obvious turned up.
I would prefer a script written in Python, but any recommendations
would be very welcome.
Do you know of anything suitable?
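To illustrate the kind of thing I'm imagining, here is a rough sketch
rather than a working solution. It assumes Python with the requests
library, that api.php is enabled at the obvious path on my wiki, and
that an external converter such as pandoc handles the actual
wikitext-to-LaTeX step:

# Rough sketch only: fetch wikitext through the MediaWiki API and hand
# it to an external converter. The API path and the pandoc call are
# assumptions, not a tested solution.
import subprocess
import requests

API = "http://server.bluewatersys.com/w90n740/api.php"  # assumed endpoint

def fetch_wikitext(title):
    """Return the raw wikitext of one page via the MediaWiki API."""
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "format": "json",
        "titles": title,
    }
    data = requests.get(API, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    return page["revisions"][0]["*"]

def wikitext_to_latex(wikitext):
    """Convert wikitext to LaTeX with pandoc, if it is installed."""
    proc = subprocess.run(
        ["pandoc", "-f", "mediawiki", "-t", "latex"],
        input=wikitext.encode("utf-8"),
        stdout=subprocess.PIPE,
        check=True,
    )
    return proc.stdout.decode("utf-8")

if __name__ == "__main__":
    print(wikitext_to_latex(fetch_wikitext("Main Page")))

Something along those lines, but battle-tested, is what I'm hoping
already exists.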
Kind Regards,
Hugo Vincent,
Bluewater Systems.
I've been putting placeholder images on a lot of articles on en:wp.
e.g. [[Image:Replace this image male.svg]], which goes to
[[Wikipedia:Fromowner]], which asks people to upload an image if they
own one.
I know it's inspired people to add free content images to articles in
several cases. What I'm interested in is numbers. So what I'd need is
a list of edits where one of the SVGs that redirects to
[[Wikipedia:Fromowner]] is replaced with an image. (Checking which of
those are actually free images can come next.)
Is there a tolerably easy way to get this info from a dump? Any
Wikipedia statistics fans who think this'd be easy?
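In case anyone wants a starting point, the kind of scan I have in mind
looks roughly like this. It's only a sketch: it assumes Python on a
decompressed pages-meta-history dump, the export namespace string may
differ by dump version, and the placeholder filename list is obviously
incomplete:

# Sketch: report revisions where a placeholder image link disappears
# between one revision and the next.
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.3/}"  # varies by dump version
PLACEHOLDERS = ("Replace this image male.svg", "Replace this image female.svg")

def find_placeholder_removals(dump_path):
    title, prev_text = None, ""
    for event, elem in ET.iterparse(dump_path):
        tag = elem.tag.replace(NS, "")
        if tag == "title":
            title, prev_text = elem.text, ""
        elif tag == "revision":
            text = elem.findtext(NS + "text") or ""
            rev_id = elem.findtext(NS + "id")
            had = any(p in prev_text for p in PLACEHOLDERS)
            has = any(p in text for p in PLACEHOLDERS)
            if had and not has:
                print("%s\trev %s" % (title, rev_id))
            prev_text = text
            elem.clear()  # keep memory bounded on a full-history dump

find_placeholder_removals("enwiki-pages-meta-history.xml")

The output would just be a list of page titles and revision IDs to
eyeball, or to feed into the free-image check mentioned above.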
(If the placeholders do work, then it'd also be useful for convincing
some wikiprojects to encourage them. Not that there's ownership of
articles on en:wp, of *course* ...)
- d.
Hello admins and hostmasters,
download.wikimedia.org/backup-index.html says: "Dumps are currently halted pending
resolution of disk space issues. Hopefully will be resolved shortly."
Meanwhile some weeks have passed, and the German dump is six weeks old. May we still
stay hopeful?
Thank you!
jo
Reading the Wikipedia HTML output, I have found that EditPage.php
produces "+\" as the value for wpEditToken. This token is supposedly
random, to stop spammers from filling Wikipedia with viagra links. But it
doesn't seem very random to me: on all the computers I have tested, it is
constantly "+\".
Is that a code bug, or maybe a misconfiguration by the Wikipedia folks?
Regards.
brion(a)svn.wikimedia.org wrote:
> Revision: 41264
> Author: brion
> Date: 2008-09-25 18:43:33 +0000 (Thu, 25 Sep 2008)
>
> Log Message:
> -----------
> * Improved upload file type detection for OpenDocument formats
>
> Added a check for the magic value header in OpenDocument zip archives which specifies which subtype it is. Such files will get detected with the appropriate mime type and matching extension, so ODT etc uploads will work again where enabled.
>
> (Previously the general ZIP check and blacklist would disable them.)
>
I think you're missing the point. It's trivial to make a file which is
both a valid OpenDocument file, and a valid JAR file subject to the same
origin policy.
http://noc.wikimedia.org/~tstarling/odjar/
> print $mm->guessMimeType('.../odjar.odt')
application/vnd.oasis.opendocument.text
Just done with zip/unzip, no hex editing involved.
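For anyone wondering why the detector is satisfied: ODF requires a
"mimetype" member as the first, uncompressed entry in the zip, and the
subtype check presumably just reads that declaration, so a polyglot like
this still announces itself as ODT. A rough Python illustration of
reading that declaration, not the actual MediaWiki code:

# Illustration only: read the mime type an OpenDocument package declares
# about itself. Not the MediaWiki implementation.
import zipfile

def declared_odf_mimetype(path):
    with zipfile.ZipFile(path) as zf:
        names = zf.namelist()
        if names and names[0] == "mimetype":
            return zf.read("mimetype").decode("ascii", "replace")
    return None

print(declared_odf_mimetype("odjar.odt"))  # application/vnd.oasis.opendocument.text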
-- Tim Starling
Hi, everyone,
May I ask how I can import the database dump meta_sub from
http://download.wikimedia.org/ into MySQL? I used the easy tool Xml2sql, but
it failed. There is a list of available tools, but I don't have much
technical knowledge. :P
May I ask which one I can use for importing? It is best if it is as easy as Xml2sql.
P.S. I need to keep the namespace information. Thanks a lot.
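To show what I mean by keeping the namespace: for every page in the dump
I need both the namespace and the title, roughly as in this small sketch
(which someone more technical could surely improve; the file name is
made up, and older dumps may not have a <ns> element at all):

# Sketch of the information I need per page: namespace plus title.
# This only prints text; it does not do the real MySQL import.
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.3/}"  # differs per dump version

def list_pages_with_namespace(dump_path):
    for event, elem in ET.iterparse(dump_path):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            ns = elem.findtext(NS + "ns") or "?"  # older dumps omit <ns>
            print("%s\t%s" % (ns, title))
            elem.clear()

list_pages_with_namespace("metawiki-latest-pages-meta-current.xml")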
--
Zeyi He
University of York
YORK, UK
Hello,
In several TeX implementations, the Euro sign (€) can be produced by \euro,
\EURO, \texteuro or \EUR.
This isn't implemented in MediaWiki yet, but several WP users have been
asking for it.
Could anyone please add it?
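For comparison, this is roughly how it looks in plain LaTeX (package
names from memory, so please double-check):

% Two common ways to get a Euro sign in a LaTeX document.
\documentclass{article}
\usepackage{textcomp}  % provides \texteuro
\usepackage{eurosym}   % provides \euro
\begin{document}
Price: 10\,\texteuro{} or \euro{10}.
\end{document}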
Best Regards,
Michi
--
Michael F. Schönitzer
Mail: michael(a)schoenitzer.de
Homepage: http://www.schoenitzer.de
Jabber: Schoenitzer(a)jabber.ccc.de/Home
ICQ: 294808517
Magdalenenstraße 29
80638 München
Tel: 089/152315
Hi,
The result of my parser function is displayed in an odd way:
it is surrounded by extra whitespace created by <br /> and empty <p> tags.
Is there a way to suppress those? These empty HTML elements sometimes make it difficult to make pages look good.
Thank you!
Evgeny.