At the same time we remove the "filecommand" setting, since we will
likely have no use for it now that PECL fileinfo is available.
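For reference, a minimal sketch of the kind of lookup the fileinfo
extension enables instead of shelling out to an external command (the
file path is illustrative only):

    <?php
    // Detect a MIME type with PECL fileinfo rather than a "filecommand".
    $finfo = finfo_open(FILEINFO_MIME_TYPE);
    if ($finfo !== false) {
        $mimetype = finfo_file($finfo, '/tmp/upload.png'); // e.g. "image/png"
        finfo_close($finfo);
    }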
Also the "supported" list for attachment mime types has changed
format, so we can keep track of at least some known file extensions.
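Roughly, the new format maps each MIME type to a known extension
instead of being a flat list (the exact keys and values here are
illustrative, not the full upstream defaults):

    <?php
    // Old style: a plain list of allowed MIME types.
    // $config['attachments']['supported'] = array('image/png', 'image/jpeg');

    // New style (illustrative): MIME types mapped to a known file extension.
    $config['attachments']['supported'] = array(
        'image/png'  => 'png',
        'image/jpeg' => 'jpeg',
    );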
The Orbited plugin may not work at all anymore; I had no means to try
it. It also still checks whether 'LACONICA' is defined, which is a
very unlikely thing in the future. So far only tests and scripts have
been migrated consistently, though.
I used this hacky sed command (run it from your GNU Social root, or change the first grep's path to wherever the tree actually lies) to do a rough fix on all ::staticGet calls, renaming them to ::getKV:

    sed -i -s -e '/DataObject::staticGet/I!s/::staticGet/::getKV/Ig' $(grep -R ::staticGet `pwd`/* | grep -v -e '^extlib' | grep -v DataObject:: |grep -v "function staticGet"|cut -d: -f1 |sort |uniq)
If you're applying this, remember to change the staticGet function definitions in Managed_DataObject and Memcached_DataObject to getKV!
This might of course take some getting used to, or require modification of StatusNet plugins, but the result is that all the static calls (to staticGet) are now made properly without breaking PHP Strict Standards. Standards are there to be followed, and the old way caused some very bad confusion when combined with get_called_class.
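As a rough illustration (not the upstream classes, just the renamed
definition and the call pattern; the lookup body is omitted):

    <?php
    class Managed_DataObject
    {
        // Renamed from staticGet(); late static binding / get_called_class()
        // resolves the concrete subclass without Strict Standards warnings.
        public static function getKV($k, $v = null)
        {
            $class = get_called_class();
            // ... look up a row of $class where field $k equals $v ...
            return null;
        }
    }

    class Profile extends Managed_DataObject
    {
    }

    // Old call: Profile::staticGet('id', 1);  new call:
    $profile = Profile::getKV('id', 1);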
Reasonably, any plugin or code that tests for the definition of 'GNUSOCIAL' or similar should take this change into consideration.
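A typical top-of-file guard would now look something like this (the
older constants are only needed if the plugin must keep working on
StatusNet/Laconica installations):

    <?php
    // Refuse to run outside the application.
    if (!defined('GNUSOCIAL') && !defined('STATUSNET') && !defined('LACONICA')) {
        exit(1);
    }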
This provides initial infrastructure for decoupling display names from internal canonical names, but continues to have us storing and using the canonical forms.
It should be/become possible to provide mixed-case and underscore-containing names in links, @-mentions, !-group references, etc., but we don't generally store those alternate forms.
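Conceptually, the canonical form is derived roughly like this (a
hypothetical helper; the real normalization rules live in the nickname
handling code):

    <?php
    // Hypothetical sketch: fold a display nickname to its canonical form,
    // so "Bob_Smith" and "bobsmith" resolve to the same stored name.
    function canonical_nickname($nickname)
    {
        return strtolower(str_replace('_', '', $nickname));
    }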
File extensions can also be added to the upload type whitelist; they'll be normalized to types for the actual comparison, so only known extensions will work.
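A sketch of that normalization step (the helper name and the shape of
$supported are assumptions, not the actual upstream code):

    <?php
    // Hypothetical helper: resolve a whitelist entry to a MIME type, so
    // either "image/png" or just "png" can appear in the upload whitelist.
    // $supported is assumed to map MIME type => known extension.
    function whitelist_entry_to_type($entry, array $supported)
    {
        if (strpos($entry, '/') !== false) {
            return $entry;                          // already a MIME type
        }
        $type = array_search(strtolower(ltrim($entry, '.')), $supported);
        return $type !== false ? $type : null;      // unknown extensions fail
    }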
Fix extraction of Atom <content type="text"> and <content type="html">: we were failing to escape plaintext source data to HTML, and we were doing an extraneous double de-escape on HTML source, which broke notices containing text that looks like HTML. Only <content type="xhtml"> was handled correctly previously.
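The intended behaviour, roughly (a simplified sketch, not the actual
feed parser):

    <?php
    // Simplified sketch of per-type handling for Atom <content>.
    function atom_content_to_html($type, $content)
    {
        switch ($type) {
            case 'text':
                // Plaintext must be escaped into HTML, not passed through.
                return htmlspecialchars($content);
            case 'html':
                // The XML parse already removed the one layer of escaping;
                // a second de-escape would mangle text that looks like HTML.
                return $content;
            case 'xhtml':
            default:
                // Inline XHTML (already serialized here) was handled
                // correctly before and stays as-is.
                return $content;
        }
    }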
Fixes for RSS2 content processing: we were failing to load <content:encoded> at all because we used the wrong element name, and we were applying an extraneous de-escape to <description> rather than the escaping required to turn plaintext into HTML. (Per spec, <description> must be plaintext.)
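Again as a sketch only (element access simplified, not the real
parser):

    <?php
    // Derive HTML for an RSS2 item along the lines described above.
    function rss_item_html(DOMElement $item)
    {
        // <content:encoded> lives in the RSS content-module namespace.
        $encoded = $item->getElementsByTagNameNS(
            'http://purl.org/rss/1.0/modules/content/', 'encoded');
        if ($encoded->length > 0) {
            return $encoded->item(0)->textContent;       // already HTML
        }
        $desc = $item->getElementsByTagName('description');
        if ($desc->length > 0) {
            // Plaintext per spec: escape it, don't de-escape it.
            return htmlspecialchars($desc->item(0)->textContent);
        }
        return '';
    }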
Basic splitting/validation code submitted via http://status.net/wiki/XMPP/JID_validation -- Copyright 2009 Patrick Georgi <patrick@georgi-clan.de> Licensed under ISC-L, which is compatible with everything else that keeps the copyright notice intact.
Added PEAR Net_IDNA package to extlib to handle IDN normalization (also used by Validate's email verifier if present).
* added test suite, supplemented my own test cases with JID validation and normalization test cases from libpurple
* follows XMPP rules for validation of name part
* fixes for normalization with non-ASCII names
* will do domain checks if $config['email']['check_domain'] is on, looking for an XMPP-server SRV record or, failing that, any successful DNS lookup for the domain. (We don't actually need to contact those hosts directly, though.)
* some more obscure stringprep validation rules aren't quite followed yet, but we err on the side of permissiveness.
* we still don't actually let you save your address with a resource on it, since we strip resources when looking up users who've sent us presence or message updates. I would recommend saving the outgoing resource as a separate field if/when we add that. (A sketch of the splitting/stripping step follows below.)
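A minimal sketch of the splitting and resource-stripping described
above, plus the kind of DNS check the domain-check option implies (the
real code also applies the stringprep-style validation rules to each
part; function names here are assumptions):

    <?php
    // Split a JID into node, domain and resource, and build the bare JID
    // used when matching incoming presence/messages to users.
    function split_jid($jid)
    {
        $node = null;
        $resource = null;
        if (($slash = strpos($jid, '/')) !== false) {
            $resource = substr($jid, $slash + 1);
            $jid = substr($jid, 0, $slash);
        }
        if (($at = strpos($jid, '@')) !== false) {
            $node = substr($jid, 0, $at);
            $jid = substr($jid, $at + 1);
        }
        return array('node' => $node, 'domain' => $jid, 'resource' => $resource);
    }

    $parts = split_jid('alice@example.org/Home');
    $bare  = $parts['node'] . '@' . $parts['domain'];    // "alice@example.org"

    // Domain check when $config['email']['check_domain'] is on, roughly:
    $ok = checkdnsrr('_xmpp-server._tcp.' . $parts['domain'], 'SRV')
        || checkdnsrr($parts['domain'], 'ANY');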
First steps to parsing RSS items as activities. RSS feeds don't seem
to have enough data to make good remote profiles, but this may work
with some "hints".
We've been making pretty crummy tag: URIs for a while. We should
continue to favor HTTP URIs, since it's nice to be able to discover
things about an object you've shared the ID of. Where that's not
possible, this makes nicer tag URIs.
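For context, tag URIs follow RFC 4151; a hypothetical minting helper
for an object without a dereferenceable HTTP URI might look like:

    <?php
    // Hypothetical helper, not the actual minting code.
    function mint_tag_uri($authority, $date, $specific)
    {
        // e.g. "tag:example.net,2013-08-12:notice:12345"
        return sprintf('tag:%s,%s:%s', $authority, $date, $specific);
    }

    echo mint_tag_uri('example.net', date('Y-m-d'), 'notice:12345');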
URLs with paths followed by a double-quote character incorrectly include the quote in the URL. The double-quote character is in fact not a legal URL character and must be URL-escaped; more importantly, it causes oddities when you quote a message ending in a URL, such as when using the experimental redent-button feature.
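As an illustration of the intended matching only (the real pattern in
common_linkify is more involved):

    <?php
    // A URL match that stops at a double quote, since an unescaped '"'
    // is not a legal URL character.
    $text = 'She said "see http://example.com/foo" earlier';
    if (preg_match('/https?:\/\/[^\s"]+/', $text, $m)) {
        echo $m[0];    // http://example.com/foo
    }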
After removing 103 false positives, this leaves 4 actually broken tests, showing two failure modes for mail links:
* the 'mail without mailto' formatting shortcut in common_linkify didn't get the 'title' attribute that is added to the other URLs
* links including the mailto: protocol are being incorrectly expanded to the http: protocol in the long URL
Canonical URLs that have a protocol followed by a host (and no path) automatically get a trailing slash from the canon function; make the unit test match that.
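In other words (a simplified sketch of the rule, not the actual
canonicalization function):

    <?php
    // Ignores query/fragment handling for brevity.
    function canon_add_slash($url)
    {
        $parts = parse_url($url);
        if (isset($parts['scheme'], $parts['host']) && empty($parts['path'])) {
            return $url . '/';
        }
        return $url;
    }

    echo canon_add_slash('http://example.com');        // http://example.com/
    echo canon_add_slash('http://example.com/page');   // http://example.com/page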