Hi!
User Details
- User Since: Oct 25 2014, 1:53 AM (532 w, 2 d)
- Roles: Administrator
- Availability: Available
- IRC Nick: Bawolff
- LDAP User: Brian Wolff
- MediaWiki User: Bawolff
Yesterday
As an aside, perhaps a better solution would be to simply reject adding the group in [[Special:UserRights]] if 2FA is required for the group and the user does not have it (ideally only checking during submit, to prevent enumeration). Or perhaps force the user to enable 2FA during their next login.
Fri, Jan 3
Personally I believe that such lists of files to upload when the limit increases should instead be on Commons. There are probably thousands of such files, and their URLs might change in the next decade. Not to mention that this process might not even be the correct process in a decade's time (one certainly hopes that in a decade, upload-by-URL will be stable enough that the upload can be done directly on Commons).
Mon, Dec 30
Would it be helpful to have a document somewhere, writing up requirements and/or best practices on the following considerations ...
Sun, Dec 22
It does look like there are other buttons with a similar issue.
Fri, Dec 20
I made a hacky version of what I mean as a proof of concept (the code is just a hack, not a proper implementation).
I'm seeing some weird newlines on the dbconnect page T382566
Anyways, I feel like using BEGIN IMMEDIATE everywhere is not the right approach. A better thing to do would be:
As soon as we start a transaction with BEGIN IMMEDIATE, the whole database is locked. Other threads can't even start their own transactions, because IMMEDIATE makes SQLite treat the transaction as a write transaction right from the get-go, regardless of whether any writes will be performed. This was done to mitigate other deadlocks in T89180 / T93097.
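To illustrate, here is a minimal sketch of that behaviour outside MediaWiki's Database layer, using two plain PDO connections to the same SQLite file (the file path is just an example):

```php
// Two independent connections to the same SQLite database.
$writer = new PDO( 'sqlite:/tmp/demo.sqlite' );
$reader = new PDO( 'sqlite:/tmp/demo.sqlite' );
$reader->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );
$reader->setAttribute( PDO::ATTR_TIMEOUT, 0 ); // fail immediately instead of busy-waiting

// BEGIN IMMEDIATE takes the write lock up front, before any actual write.
$writer->exec( 'BEGIN IMMEDIATE' );

// A second BEGIN IMMEDIATE now fails with SQLITE_BUSY ("database is locked"),
// even though the first transaction has not written anything yet.
try {
	$reader->exec( 'BEGIN IMMEDIATE' );
} catch ( PDOException $e ) {
	echo "second transaction refused: {$e->getMessage()}\n";
}

$writer->exec( 'COMMIT' );
```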
Tue, Dec 17
[Not sure if security considers this type of DoS in its purview or not. I tend to think of this as more being just an ordinary bug]
One issue here is that $wgPageViewInfoWikimediaRequestLimit (currently 5) might not make sense, because multiple pages could be rendered in a single request (especially in the job queue).
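As a hedged sketch of the mismatch (the function and counter are hypothetical, not the extension's actual code; only the config variable name is real):

```php
function wfFetchPageViewData( string $title ): ?array {
	global $wgPageViewInfoWikimediaRequestLimit;
	// The cap is per PHP process, but a job runner can parse many pages in
	// one process, so later pages silently get no data.
	static $callsThisProcess = 0;

	if ( $callsThisProcess >= $wgPageViewInfoWikimediaRequestLimit ) {
		return null;
	}
	$callsThisProcess++;
	// ... perform the actual HTTP request to the pageview API here ...
	return [];
}
```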
Mon, Dec 9
The main scary part of log_params is when really old code uses the old newline serialization method, which is kind of unsafe in context. But that is beside the point.
Dec 6 2024
One major difference is that if you use recursiveTagParse() and output the result in a strip marker, the use of a general strip marker means that links still work, whereas in a nowiki strip marker, I don't think they do.
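As a hedged illustration of the difference (the tag names are hypothetical; the markerType return convention is the parser's real one):

```php
// Registered from a ParserFirstCallInit handler, e.g.:
//   $parser->setHook( 'demogeneral', 'wfDemoGeneralTag' );
//   $parser->setHook( 'demonowiki', 'wfDemoNowikiTag' );

function wfDemoGeneralTag( $input, array $args, Parser $parser, PPFrame $frame ) {
	// Returned as a plain string, the result is wrapped in a *general*
	// strip marker, and [[links]] inside it still get rendered.
	return $parser->recursiveTagParse( $input, $frame );
}

function wfDemoNowikiTag( $input, array $args, Parser $parser, PPFrame $frame ) {
	// The same content behind a *nowiki* strip marker: later parser passes
	// (link rendering among them, as far as I can tell) skip it.
	return [ $parser->recursiveTagParse( $input, $frame ), 'markerType' => 'nowiki' ];
}
```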
Yeah, some of those links do look bad. Like if the user ends up on https://www.mediawiki.org/wiki/Category:All_skins/km they aren't going to find what they are looking for
Dec 5 2024
Keep in mind, this sort of thing is much harder than it looks if you need to account for mXSS.
Perhaps WMF teams related to Commons can help? With all respect, the lack of maintenance effort on MediaWiki file backends is creating an undue burden on volunteers with shell to complete these types of tasks.
Nov 30 2024
As an aside, I just realized you can do a surprising amount of drawing with just CSS. See https://en.wikipedia.org/wiki/Module:Sandbox/Bawolff/canvas for an example.
Nov 28 2024
I think Special:ListFiles is one of the few places where you can get a thumbnail of a file that is not the current version. FlaggedRevs might also be able to make that happen.
For example, consider https://commons.wikimedia.org/w/index.php?title=Special:ListFiles&offset=20130105&user=BotMultichillT&ilshowall=1 - all the files marked "yes" under the "current version" column on that list work; all the ones marked "no" do not.
I think it's because another user uploaded a new version. Special:ListFiles shows the old version that Bigtime Boy uploaded. However, this confuses MediaViewer, which assumes that all thumbnails are for the current version.
Nov 27 2024
To give some context: it's known that certain upload failures are intermittent, for example if there is a new deploy in the middle of the upload. It would be great if v2c retried at least once before telling users to request a server-side upload.
Nov 25 2024
Just FYI, here is a permalink to the discussion on Commons: https://commons.wikimedia.org/wiki/Commons:Village_pump/Proposals/Archive/2020/08#RfC%3A_Deprecate_XCF_file_format . It wasn't a popular proposal.
Nov 24 2024
Nov 21 2024
Nov 20 2024
Given it's not even possible to set your skin to cologneblue or modern in Special:Preferences without using a secret URL parameter, I don't see much point in keeping these skins deployed.
Nov 12 2024
BTW, this seems to be the log entry for the 500 that the user from Discord experienced: https://logstash.wikimedia.org/app/discover#/doc/logstash-*/logstash-default-1-7.0.0-1-2024.11.12?id=AL88IJMBLmySI1N_AuNw
I'm pretty sure you're not allowed to link to or even mention the brand names Telegram or Instagram or Google from a sitenotice. You can't from a CentralNotice; I don't see why a different policy should apply here.
Nov 9 2024
That said, it does seem like the p99 for AssembleChunkUpload jobs has spiked to ~15 min for the last 2 hours (was fine before that point), so maybe that is just it. Maybe driven by a spike in ChangeDeletionNotification jobs. Sounds like a dedicated queue as Scott suggests would really help.
@MBH let's open a separate new task to investigate, as the cause could be something different from the job queue thing this task is about. If you want, you could email the HAR file to me ( bawolff@gmail.com ).
Right. The point of this bug is it is confusing to have the checkboxes beside the items in the file history section if they do not do anything. They should either be removed or connected to the button.
Nov 8 2024
I mean the circled button "change visibility of selected revisions", not the text link.
Nov 6 2024
Not that button, the "change visibility" button. It only shows up if your user account has the correct rights, which admins might not have by default.
Nov 5 2024
Thank you for tagging this task with good first task for Wikimedia newcomers!
Oct 31 2024
Just as an aside, I believe PublishStashedFile and AssembleUploadChunks are considered low-traffic jobs. Unlike normal jobs, these are very latency-sensitive, as they don't happen in the background: the UI actually makes users wait while these jobs complete (see also T378276). It would be really great if somehow these jobs could be prioritized in a job-queue backlog situation.
This issue is probably caused by T378385.
I don't know what a HAR file is, but I am some sort of technically literate user, so maybe I can obtain it if you explain to me how to create it.
After briefly looking through the logs, I see a bunch of cases where it looked like it took about 2.5 minutes between the publish job being sent and the job queue picking it up. I'm not sure if that's considered within an acceptable time frame, or what the normal time frame for something like this is. I don't know if I'm looking at the right files, so I'm not sure if this is what is being complained about.
Seeing some:
HEAD http://ms-fe.svc.codfw.wmnet/wikipedia/commons/thumb/1/1c/Holy_Transfiguration_Armenian_Cathedral%2C_Moscow_52.jpg/320px-Holy_Transfiguration_Armenian_Cathedral%2C_Moscow_52.jpg HTTP/1.1 - NULL cURL error 28: Connection timed out after 1000 milliseconds
It would probably help to know:
Seems like there are multiple complaints https://commons.wikimedia.org/wiki/Commons:Village_pump/Technical#Upload_Wizard_very_slow
Oct 30 2024
Also in the sense that testwiki is in $wgCrossSiteAJAXdomains. You should assume that if you compromise an account on testwiki (e.g. via an XSS), it is compromised on all wikis.
The copyright footer is not shown on Special:UserLogin, nor (as far as I can tell) on any other page that has JS disabled;
Oct 29 2024
Huh, seems like that should have already been fixed https://github.com/toolforge/video2commons/issues/207
Oct 27 2024
Oct 23 2024
From a security perspective, I guess you can load any file in includes/ ending in .php, since that is what the MediaWiki namespace is mapped to (and who knows what else in extensions/ and vendor/). Still, it seems hard to come up with anything evil.
Ah, I think I overreacted here. Sorry for the panic.
If possible, it would help to know what the subject line of any LQT threads from that page were, if they were suspicious [possibly the suspicious subject lines were just being previewed and not saved]
I'm not very familiar with LQT, so I might be overreacting, but to me that exception looks like what would happen if there were some sort of deserialization vuln being exploited.
At a glance, i think you should assume whatever server this was running under is likely compromised.
Wait, was T179080 never fixed? Kind of sounds a little like that. But I'm not sure, I could very likely be wrong. Edit: I think that is unrelated.
Say the wiki page selects either the SVG or the PNG rendering based on size. Say it is a small SVG file and SVG is selected. Now somebody comes along and uploads a 20 MB SVG image on top of the original, small SVG. That would mean all the pages that reference that SVG file need to be rebuilt, even though the aspect ratio did not change. Alternatively, the fetch of the overweight SVG would have to be turned into a PNG fetch. Maybe page rebuilds are not expensive, but some SVG files are used on a lot of pages.
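A hedged sketch of the kind of size-based selection being discussed (the threshold and function name are hypothetical; the File methods are core's):

```php
function wfChooseSvgSource( File $file, int $displayWidth ): string {
	// Hypothetical cut-off: small SVGs get served as-is, big ones rasterized.
	$threshold = 256 * 1024;

	if ( $file->getSize() <= $threshold ) {
		return $file->getUrl(); // the SVG itself
	}

	// Fall back to the PNG thumbnail. The problem described above: a
	// re-upload can move the file across $threshold without changing its
	// dimensions, so every page whose output depends on this branch would
	// need to be re-parsed.
	$thumb = $file->transform( [ 'width' => $displayWidth ] );
	return $thumb ? $thumb->getUrl() : $file->getUrl();
}
```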
Re glrx:
I'm not an expert, but I think that change would be localized to Thumbor. If Thumbor is asked to rasterize an SVG file, it can notice the file is small and then serve it directly. If Thumbor sets the MIME type, then I think the img element will display it properly. But it also butchers the current semantics: a URL that formerly always gave a PNG file now might give an SVG file. Some OCR code I use will not take SVG but will take PNG; I use something like {{filepath:foo.svg|800}} to get a PNG. Maybe add something to the URL that requires a PNG, or obey HTTP requests that ask only for the PNG MIME type.
Oct 22 2024
If you are effectively saying that an SVG rasterizer yields better results on files which contain JavaScript than client-side rendering of the same file via <img>, please highlight that significant concern in T5593.
there was no robust and up-to-date FLOSS SVG sanitiser that could ensure that the SVGs were safe to display directly in the browser.
Oct 21 2024
Also this seems more like a feature request than a security issue. Maybe this should be made public so a broader group can comment on it.
I feel like safemode would be difficult to use as a security feature. It's not sticky: users would have to manually type the URL parameter (e.g. appending ?safemode=1) on every page. Edit: apparently this is a user preference now, which maybe changes things with regards to how much it makes sense as a security feature.
Oct 16 2024
Should I just submit a patch to Gerrit? This is on *.wmcloud.org, so it's not in the same domain as real sites, and thus XSS isn't that bad.
I am unable to reproduce this on stock Hound. It might not be an upstream issue.
Oct 15 2024
The payload works doubled in the same vulnerable param (etc., as mentioned above):
https://codesearch.wmcloud.org/search/?q=poc%3Cscript%3Ealert(window.origin)%3C/script%3E%3Cscript%3Ealert(document.domain)%3C/script%3E&files=asd&excludeFiles=test&repos=test
To me, it looks unlikely that a CSP policy added at a proxy layer would help, unless it was hash- or nonce-based (or disabled JS entirely), which would normally require application changes, since this is injecting into a valid script tag (as opposed to being an HTML injection).
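A hedged sketch of the distinction (illustrative only, not codesearch's actual code): the payload lands inside a script element the page itself emits, so only a policy that tags the page's own scripts, via a nonce or hash set by the application, can tell them apart.

```php
$nonce = base64_encode( random_bytes( 16 ) );
header( "Content-Security-Policy: script-src 'nonce-{$nonce}'" );

$query = $_GET['q'] ?? '';

// Safe: the value is JSON-encoded into a script that carries the nonce.
echo '<script nonce="' . htmlspecialchars( $nonce ) . '">'
	. 'var q = ' . json_encode( $query, JSON_HEX_TAG | JSON_HEX_AMP ) . ';'
	. '</script>';

// The vulnerable pattern being exploited here is the opposite: the raw
// parameter interpolated into an existing <script> block. A proxy-added
// allowlist CSP can't distinguish that block from a legitimate one, because
// the injected code executes as part of a script the policy already permits.
```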
Oct 8 2024
The initial discussion was around pages using <translate> on Commons, where the translated version was using twice as many checks because it needed to check both English and whatever language it was translated into.
Oct 6 2024
Sep 30 2024
Sep 13 2024
I think Special:Tags is pretty scalable since the creation of the change_tag_def table.
Sep 12 2024
Sep 10 2024
Just FYI, I started an extension to do something like this: https://www.mediawiki.org/wiki/Extension:Hashtags . The extension is not necessarily aimed at Wikimedia. I personally believe that hashtags being ad hoc (Be bold!) is their primary value proposition, but there is a config option in the extension to make it work only with specific hashtags.
Sep 5 2024
There is the deleteTag.php maintenance script, which bypasses the 5000 limit, but obviously regular wiki users cannot use it.
Aug 31 2024
The problem is that DatabaseUpdater::loadExtensions() does not set Installer::$virtualDomains, which is in turn used to construct the load balancer during update via the installer.
I can confirm that I can reproduce this locally with the web updater, with no special configuration other than enabling OATHAuth.
I asked the user to try the command-line updater, and it worked, so I guess it's specific to the web updater. (It's unclear whether all the other reports were for the web updater. I assumed they were using the command line, but maybe that was a bad assumption on my part. I also got the impression that some of them were during normal operation, not update, but maybe I was wrong.)
Talking with one of the affected users from Discord, I got them to var_dump( \ExtensionRegistry::getInstance()->getAttribute( 'DatabaseVirtualDomains' ) ) at the time of making the DB select, and it returned an empty array. Which is kind of weird: it suggests that the issue isn't the DB code at all, but that extension.json isn't registering the virtual domain.
Another potential theory is that somehow the DBLoadBalancerFactoryConfigBuilder service gets constructed prior to all extensions being loaded (thus having the wrong arguments).
Aug 30 2024
If you're interested in making the existing SVG filter more robust, by all means write a patch.